Hybrid ARIMA-LSTM Code - Sentiment

The hybrid ARIMA-LSTM model is open to a variety of experimentation. For ideal performance, a balance must be struck between the levels of volatility that suit the ARIMA and LSTM models. Using shorter MA periods that yield a near-mesokurtic distribution (Pearson kurtosis close to 3) may achieve a better volatility balance between the two models.
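The kurtosis screen described above can be sketched in a few lines. This is a minimal illustration on synthetic data, using a pandas rolling mean in place of the TA-Lib moving averages used later in the notebook; the 14-value tail and the 3 ±5% band mirror the screening loop below.

```python
import numpy as np
import pandas as pd
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
# Synthetic price series: a random walk standing in for real closing prices
prices = pd.Series(100 + rng.normal(0, 1, 500).cumsum())

def pearson_kurtosis(series, period, window=14):
    """Kurtosis (Pearson definition, normal ~= 3) of the last `window` values of a simple MA."""
    ma = series.rolling(period).mean().tail(window)
    return kurtosis(ma, fisher=False)

# Screen MA periods 4-30 and keep those whose K lies within 3 +/- 5%
viable = [p for p in range(4, 31)
          if abs(pearson_kurtosis(prices, p) - 3) < 3 * 0.05]
```

Which periods survive depends entirely on the series; on real data the screen below found none within the band.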

Import Libraries

In [17]:
import pandas as pd
pd.set_option('display.max_rows', 500)
import timeit
In [18]:
!pip install -q -U keras-tuner
In [19]:
import keras_tuner as kt
In [20]:
!pip install pmdarima
Successfully installed pmdarima-1.8.4 statsmodels-0.13.1
In [21]:
import pmdarima
In [22]:
url = 'https://launchpad.net/~mario-mariomedina/+archive/ubuntu/talib/+files'
!wget $url/libta-lib0_0.4.0-oneiric1_amd64.deb -qO libta.deb
!wget $url/ta-lib0-dev_0.4.0-oneiric1_amd64.deb -qO ta.deb
!dpkg -i libta.deb ta.deb
!pip install ta-lib
import talib
Setting up libta-lib0 (0.4.0-oneiric1) ...
Setting up ta-lib0-dev (0.4.0-oneiric1) ...

Successfully installed ta-lib-0.4.22
In [23]:
import tensorflow
import statsmodels.tsa.api
import keras
import sklearn
In [24]:
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, LSTM, Dropout, Bidirectional,BatchNormalization, Embedding, TimeDistributed, LeakyReLU, GRU
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
In [25]:
from keras.models import Sequential, load_model
from keras.layers import Dense, LSTM, Activation, Dropout
from keras import backend as K
from keras.utils.generic_utils import get_custom_objects
from keras.callbacks import ModelCheckpoint,EarlyStopping
from keras.regularizers import l1_l2
In [26]:
import math
In [27]:
from statsmodels.tsa.api import VAR
from statsmodels.tsa.statespace.varmax import VARMAX,VARMAXResults
In [28]:
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error, mean_absolute_error
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
In [29]:
from matplotlib import pyplot
In [30]:
import json
import datetime
import pandas as pd
import numpy as np
import os
from scipy.stats import kurtosis
import pmdarima as pm
from pmdarima import auto_arima
from talib import abstract
import matplotlib.pyplot as plt
# plt.rcParams.update({'font.size': 16})
from matplotlib.pyplot import figure
from numpy import array
from numpy import hstack
from keras.layers import RepeatVector, TimeDistributed
In [31]:
from keras.utils.generic_utils import get_custom_objects
from tensorflow.keras.utils import plot_model
In [32]:
import warnings
from statsmodels.tools.sm_exceptions import ConvergenceWarning
warnings.simplefilter('ignore', ConvergenceWarning)

Load Data

In [6]:
from google.colab import drive
drive.mount('/content/drive')
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
In [40]:
cd drive/MyDrive/Stock price prediction/Generated datasets
/content/drive/.shortcut-targets-by-id/1IaGjVBlTspxI2CHSrxfYnaiYvsaG0pHs/Stock price prediction/Generated datasets
In [41]:
df = pd.read_csv("FULL_Data_google_COVID_bull_bear.csv",parse_dates=[0])
df.tail(10)
Out[41]:
Unnamed: 0 Unnamed: 0.1 Unnamed: 0.1.1 Unnamed: 0.1.1.1 Open High Low Close Adj Close Volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp Date search COVID positiveIncrease COVID deathIncrease bull score bear score fourier bull 10 fourier bull 30 fourier bear 10 fourier bear 30
1592 1592 1781 1781 1781 150.199997 151.429993 150.059998 150.809998 150.809998 56787900.0 150.565717 148.423811 -1.137777 2.817933 154.059677 142.787944 150.767809 5.009368 93.428749 -0.061228 100.779503 -0.039111 103.599003 -0.022436 2021-11-09 19 112313 1258 0.119141 0.111328 NaN NaN NaN NaN
1593 1593 1782 1782 1782 150.020004 150.130005 147.850006 147.919998 147.919998 65187100.0 150.417145 148.729049 -1.236913 2.144358 153.017766 144.440332 148.869268 4.989888 92.922909 -0.061683 99.694365 -0.039762 101.872301 -0.022657 2021-11-10 19 80301 1470 0.154297 0.109375 NaN NaN NaN NaN
1594 1594 1783 1783 1783 148.960007 149.429993 147.679993 147.869995 147.869995 41000000.0 150.110001 149.060477 -1.165047 1.767475 152.595428 145.525526 148.203086 4.989548 92.416471 -0.062129 98.604584 -0.040391 100.137594 -0.022839 2021-11-11 19 94975 1662 0.102845 0.126915 NaN NaN NaN NaN
1595 1595 1784 1784 1784 148.429993 150.399994 147.479996 149.990005 149.990005 63632600.0 149.895715 149.357144 -0.869308 1.420732 152.198608 146.515681 149.394365 5.003879 91.909483 -0.062566 97.510555 -0.040998 98.396260 -0.022980 2021-11-12 19 55499 797 0.157277 0.080595 NaN NaN NaN NaN
1596 1596 1785 1785 1785 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2021-11-13 19 146529 2505 0.139459 0.083243 NaN NaN NaN NaN
1597 1597 1786 1786 1786 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2021-11-14 19 40964 479 0.151261 0.100840 NaN NaN NaN NaN
1598 1598 1787 1787 1787 150.369995 151.880005 149.429993 150.000000 150.000000 59222800.0 149.758571 149.602859 -0.907641 1.229694 152.062246 147.143471 149.798122 5.003946 91.401994 -0.062993 96.412672 -0.041581 96.649685 -0.023077 2021-11-15 22 30290 148 0.136737 0.109389 NaN NaN NaN NaN
1599 1599 1788 1788 1788 149.940002 151.490005 149.339996 151.000000 151.000000 59256200.0 149.718571 149.814763 -0.791320 1.236243 152.287250 147.342277 150.599374 5.010635 90.894052 -0.063410 95.311334 -0.042140 94.899260 -0.023130 2021-11-16 22 138962 1294 0.135531 0.115385 NaN NaN NaN NaN
1600 1600 1789 1789 1789 151.000000 155.000000 150.990005 153.490005 153.490005 88807000.0 150.154286 150.040002 -0.657719 1.467121 152.974245 147.105759 152.526461 5.027099 90.385704 -0.063817 94.206941 -0.042673 93.146378 -0.023135 2021-11-17 22 87626 1290 0.100870 0.126957 NaN NaN NaN NaN
1601 1601 1790 1790 1790 153.710007 158.669998 153.050003 157.869995 157.869995 137659100.0 151.162857 150.450002 -0.609656 2.267825 154.985653 145.914351 156.088817 5.055417 89.877000 -0.064214 93.099895 -0.043179 91.392433 -0.023090 2021-11-18 22 111404 1637 0.145098 0.121569 NaN NaN NaN NaN
In [ ]:
cd ..
In [45]:
cd Archana - LSTM Hybrid/Outputs/sentiment
/content/drive/.shortcut-targets-by-id/1IaGjVBlTspxI2CHSrxfYnaiYvsaG0pHs/Stock price prediction/Archana - LSTM Hybrid/Outputs/sentiment
In [47]:
pd.to_datetime(df[np.isnan(df.Close)==True]['Date']).dt.day_name().head(5)
Out[47]:
0    Saturday
1      Sunday
3     Tuesday
7    Saturday
8      Sunday
Name: Date, dtype: object
In [46]:
len(pd.to_datetime(df[np.isnan(df.Close)==True]['Date']).dt.day_name())
Out[46]:
497
In [48]:
len(df)
Out[48]:
1602
In [49]:
len(df) - len(pd.to_datetime(df[np.isnan(df.Close)==True]['Date']).dt.day_name())
Out[49]:
1105
In [32]:
df.dropna(inplace=True)
len(df)
Out[32]:
1080
In [51]:
pd.to_datetime(df[np.isnan(df.Close)==True]['Date']).dt.day_name().head(5)
Out[51]:
0    Saturday
1      Sunday
3     Tuesday
7    Saturday
8      Sunday
Name: Date, dtype: object
In [52]:
df.head(5)
Out[52]:
Unnamed: 0 Unnamed: 0.1 Unnamed: 0.1.1 Unnamed: 0.1.1.1 Open High Low Close Adj Close Volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp Date search COVID positiveIncrease COVID deathIncrease bull score bear score fourier bull 10 fourier bull 30 fourier bear 10 fourier bear 30
0 0 189 189 189 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2017-07-01 17 0 0 0.000000 0.00 0.141086 0.147308 0.100437 0.101678
1 1 190 190 190 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2017-07-02 17 0 0 0.200000 0.20 0.141930 0.147118 0.100488 0.100526
2 2 191 191 191 36.220001 36.325001 35.775002 35.875000 34.054882 57111200.0 36.173571 36.751904 0.303356 0.960520 38.672945 34.830864 35.924548 3.551770 38.458011 0.046984 29.704545 0.102857 43.304973 -0.053955 2017-07-03 15 0 0 0.666667 0.00 0.142778 0.146810 0.100537 0.099251
3 3 192 192 192 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2017-07-04 15 0 0 0.000000 0.25 0.143631 0.146382 0.100585 0.097860
4 4 193 193 193 35.922501 36.197498 35.680000 36.022499 34.194897 86278400.0 36.095357 36.634762 0.328795 0.852735 38.340231 34.929292 35.989849 3.555991 38.240991 0.049445 29.954520 0.099254 43.438321 -0.053936 2017-07-05 15 0 0 0.400000 0.00 0.144487 0.145833 0.100630 0.096361
In [53]:
stock_col= list(df.columns)
stock_col = stock_col[4:len(stock_col)]
In [54]:
dataset_final = df[stock_col]
In [55]:
dataset_final.head(5)
Out[55]:
Open High Low Close Adj Close Volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp Date search COVID positiveIncrease COVID deathIncrease bull score bear score fourier bull 10 fourier bull 30 fourier bear 10 fourier bear 30
0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2017-07-01 17 0 0 0.000000 0.00 0.141086 0.147308 0.100437 0.101678
1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2017-07-02 17 0 0 0.200000 0.20 0.141930 0.147118 0.100488 0.100526
2 36.220001 36.325001 35.775002 35.875000 34.054882 57111200.0 36.173571 36.751904 0.303356 0.960520 38.672945 34.830864 35.924548 3.551770 38.458011 0.046984 29.704545 0.102857 43.304973 -0.053955 2017-07-03 15 0 0 0.666667 0.00 0.142778 0.146810 0.100537 0.099251
3 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2017-07-04 15 0 0 0.000000 0.25 0.143631 0.146382 0.100585 0.097860
4 35.922501 36.197498 35.680000 36.022499 34.194897 86278400.0 36.095357 36.634762 0.328795 0.852735 38.340231 34.929292 35.989849 3.555991 38.240991 0.049445 29.954520 0.099254 43.438321 -0.053936 2017-07-05 15 0 0 0.400000 0.00 0.144487 0.145833 0.100630 0.096361

Data Load for Experiments with Technical Indicators & Bull Bear

In [56]:
stock_col= list(df.columns)
stock_col1 = stock_col[4:len(stock_col)-9]
stock_col2 = stock_col[len(stock_col)-4:len(stock_col)]
for i in range(len(stock_col2)):
  stock_col1.append(stock_col2[i])
dataset_final = df[stock_col1]
dataset_final.head(5)
Out[56]:
Open High Low Close Adj Close Volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp Date fourier bull 10 fourier bull 30 fourier bear 10 fourier bear 30
0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2017-07-01 0.141086 0.147308 0.100437 0.101678
1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2017-07-02 0.141930 0.147118 0.100488 0.100526
2 36.220001 36.325001 35.775002 35.875000 34.054882 57111200.0 36.173571 36.751904 0.303356 0.960520 38.672945 34.830864 35.924548 3.551770 38.458011 0.046984 29.704545 0.102857 43.304973 -0.053955 2017-07-03 0.142778 0.146810 0.100537 0.099251
3 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2017-07-04 0.143631 0.146382 0.100585 0.097860
4 35.922501 36.197498 35.680000 36.022499 34.194897 86278400.0 36.095357 36.634762 0.328795 0.852735 38.340231 34.929292 35.989849 3.555991 38.240991 0.049445 29.954520 0.099254 43.438321 -0.053936 2017-07-05 0.144487 0.145833 0.100630 0.096361
In [57]:
# Set the Date column as the DatetimeIndex
datetime_series = pd.to_datetime(dataset_final['Date'])
datetime_index = pd.DatetimeIndex(datetime_series.values)
dataset_final = dataset_final.set_index(datetime_index)
dataset_final = dataset_final.sort_values(by='Date')
dataset_final = dataset_final.drop(columns='Date')
dataset_final.head(5)
Out[57]:
Open High Low Close Adj Close Volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp fourier bull 10 fourier bull 30 fourier bear 10 fourier bear 30
2017-07-01 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 0.141086 0.147308 0.100437 0.101678
2017-07-02 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 0.141930 0.147118 0.100488 0.100526
2017-07-03 36.220001 36.325001 35.775002 35.875000 34.054882 57111200.0 36.173571 36.751904 0.303356 0.960520 38.672945 34.830864 35.924548 3.551770 38.458011 0.046984 29.704545 0.102857 43.304973 -0.053955 0.142778 0.146810 0.100537 0.099251
2017-07-04 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 0.143631 0.146382 0.100585 0.097860
2017-07-05 35.922501 36.197498 35.680000 36.022499 34.194897 86278400.0 36.095357 36.634762 0.328795 0.852735 38.340231 34.929292 35.989849 3.555991 38.240991 0.049445 29.954520 0.099254 43.438321 -0.053936 0.144487 0.145833 0.100630 0.096361

Train & Test Datasets for the Multistep Process

In [59]:
# Get features and target
X_value = pd.DataFrame(dataset_final.iloc[:, :])
y_value = pd.DataFrame(dataset_final.iloc[:, 3])
In [60]:
y_value.head(5)
Out[60]:
Close
2017-07-01 NaN
2017-07-02 NaN
2017-07-03 35.875000
2017-07-04 NaN
2017-07-05 36.022499
In [61]:
# Normalize the data to [-1, 1]
X_scaler = MinMaxScaler(feature_range=(-1, 1))
y_scaler = MinMaxScaler(feature_range=(-1, 1))
X_scaler.fit(X_value)
y_scaler.fit(y_value)
Out[61]:
MinMaxScaler(feature_range=(-1, 1))
In [62]:
X_scale_dataset = X_scaler.fit_transform(X_value)
y_scale_dataset = y_scaler.fit_transform(y_value)
In [63]:
X_scale_dataset.shape, y_scale_dataset.shape,
Out[63]:
((1602, 24), (1602, 1))
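Because training happens in scaled space, `y_scaler.inverse_transform` is what maps predictions back to price units later on. A minimal round-trip sketch on toy values (hypothetical prices, not from the dataset):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Toy close prices standing in for y_value
y_toy = np.array([[35.9], [36.0], [150.8], [157.9]])
scaler = MinMaxScaler(feature_range=(-1, 1))
y_scaled = scaler.fit_transform(y_toy)

# The scaled minimum maps to -1 and the maximum to 1;
# inverse_transform recovers the original prices
y_back = scaler.inverse_transform(y_scaled)
```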
In [64]:
X_value.shape[1]
Out[64]:
24

N Steps Definition

In [65]:
n_steps_in = 3
n_features = X_value.shape[1]  # 24 features
n_steps_out = 1
In [66]:
# Reshape the data
'''Set the data input steps and output steps:
    here we use n_steps_in (3) days of data to predict the next day's price,
    reshaped to (None, n_steps_in, n_features) for LSTM input'''
# Get X/y dataset
def get_X_y(X_data, y_data):
    X = list()
    y = list()
    yc = list()

    length = len(X_data)
    for i in range(0, length, 1):
        X_value = X_data[i: i + n_steps_in][:, :]
        y_value = y_data[i + n_steps_in: i + (n_steps_in + n_steps_out)][:, 0]
        yc_value = y_data[i: i + n_steps_in][:, :]
        # Keep only complete windows (drops the last few rows at the end of the series)
        if len(X_value) == n_steps_in and len(y_value) == n_steps_out:
            X.append(X_value)
            y.append(y_value)
            yc.append(yc_value)

    return np.array(X), np.array(y), np.array(yc)
In [67]:
# get the train test predict index
def predict_index(dataset, X_train, n_steps_in, n_steps_out):

    # offset by the n_steps_in warm-up window so each index matches a prediction date
    train_predict_index = dataset.iloc[n_steps_in : X_train.shape[0] + n_steps_in + n_steps_out - 1, :].index
    test_predict_index = dataset.iloc[X_train.shape[0] + n_steps_in:, :].index

    return train_predict_index, test_predict_index
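A toy check of the index helper above, on a hypothetical 10-row dated frame: each prediction is dated `n_steps_in` rows after the start of its input window, and the train and test index ranges do not overlap.

```python
import pandas as pd

# Hypothetical setup: 10 dated rows, a train set of 5 windows, n_steps_in=3, n_steps_out=1
dates = pd.date_range('2021-01-01', periods=10)
ds = pd.DataFrame({'close': range(10)}, index=dates)
X_train_len, n_in, n_out = 5, 3, 1

# Same slicing as predict_index: skip the n_in warm-up rows, then split at the train boundary
train_idx = ds.iloc[n_in : X_train_len + n_in + n_out - 1].index
test_idx = ds.iloc[X_train_len + n_in :].index
```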
In [68]:
def mean_absolute_percentage_error(actual, prediction):
    # Note: this overrides the sklearn function of the same name imported above,
    # and returns the error as a percentage rather than a fraction
    actual = pd.Series(actual)
    prediction = pd.Series(prediction)
    return 100 * np.mean(np.abs(actual - prediction) / actual)
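A quick sanity check of the MAPE formula on known values, restated standalone: errors of 10% and 5% should average to 7.5%.

```python
import numpy as np
import pandas as pd

def mape(actual, prediction):
    actual = pd.Series(actual)
    prediction = pd.Series(prediction)
    return 100 * np.mean(np.abs(actual - prediction) / actual)

# 100 vs 110 is a 10% error, 200 vs 190 is a 5% error -> mean 7.5%
result = mape([100, 200], [110, 190])  # 7.5
```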
In [69]:
# Split train/test dataset (chronological 75/25 split, no shuffling)
def split_train_test(data):
    train_size = round(len(data) * 0.75)
    data_train = data[0:train_size]
    data_test = data[train_size:]
    return data_train, data_test
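For time series the split must stay chronological, so no shuffling is applied; everything in the test set comes strictly after the training set. A minimal sketch of the 75/25 split on toy data:

```python
import numpy as np

def split_75_25(data):
    """Chronological split: the first 75% trains, the rest tests."""
    train_size = round(len(data) * 0.75)
    return data[:train_size], data[train_size:]

data = np.arange(8)
train, test = split_75_25(data)
# 8 rows -> 6 train, 2 test, with the test rows strictly later in time
```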
In [70]:
# Get data and check shape
X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)  # X: (1599, 3, 24) -- each 3 x 24 slice is 3 days of features; yc holds the corresponding scaled closing prices
X_train, X_test, = split_train_test(X)
y_train, y_test, = split_train_test(y)
yc_train, yc_test, = split_train_test(yc)
index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
In [71]:
# %% --------------------------------------- Check dataset shapes -----------------------------------------------------------------
print('X shape: ', X.shape)
print('y shape: ', y.shape)
print('X_train shape: ', X_train.shape)
print('y_train shape: ', y_train.shape)
print('y_c_train shape: ', yc_train.shape)
print('X_test shape: ', X_test.shape)
print('y_test shape: ', y_test.shape)
print('y_c_test shape: ', yc_test.shape)
print('index_train shape:', index_train.shape)
print('index_test shape:', index_test.shape)
X shape:  (1599, 3, 24)
y shape:  (1599, 1)
X_train shape:  (1199, 3, 24)
y_train shape:  (1199, 1)
y_c_train shape:  (1199, 3, 1)
X_test shape:  (400, 3, 24)
y_test shape:  (400, 1)
y_c_test shape:  (400, 3, 1)
index_train shape: (1199,)
index_test shape: (400,)
In [72]:
output_dim = y_train.shape[1]
output_dim
Out[72]:
1
In [73]:
df = dataset_final.copy()
In [74]:
df.rename(columns={'Date':'date','Open':'open','Low':'low','Close':'close','Volume':'volume','High':'high'}, inplace = True)
df.reset_index(drop=True,inplace=True)
In [75]:
df.head(1)
Out[75]:
open high low close Adj Close volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp fourier bull 10 fourier bull 30 fourier bear 10 fourier bear 30
0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 0.141086 0.147308 0.100437 0.101678
In [76]:
# df.drop(['volume', 'MACD','20SD','logmomentum','absolute of 3 comp','angle of 3 comp','absolute of 6 comp','angle of 6 comp','absolute of 9 comp','angle of 9 comp'], axis='columns', inplace=True) # only keep columns that can help as residuals in Arima Hybrid
In [77]:
df.head(1)
Out[77]:
open high low close Adj Close volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp fourier bull 10 fourier bull 30 fourier bear 10 fourier bear 30
0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 0.141086 0.147308 0.100437 0.101678

Train & Test Length

In [78]:
test_len = len(X_test)
In [79]:
train_len = len(X_train )
In [80]:
test_len, train_len
Out[80]:
(400, 1199)

Kurtosis Review

In [81]:
# Initialize moving averages from TA-Lib, store functions in a dictionary
# MIDPRICE is excluded because it requires high/low inputs while our series is univariate
talib_moving_averages = ['SMA', 'EMA', 'WMA', 'DEMA', 'KAMA', 'MIDPOINT', 'T3', 'TEMA', 'TRIMA']
functions = {}
for ma in talib_moving_averages:
    functions[ma] = abstract.Function(ma)

# Determine kurtosis "K" values for MA periods 4-99
kurtosis_results = {'period': []}
for i in range(4, 100):
    kurtosis_results['period'].append(i)
    for ma in talib_moving_averages:
        # Run the moving average on the training slice (the last test_len days
        # are held out for testing), then trim the MA result to the last 14 days
        ma_output = functions[ma](df[:-test_len], i).tail(14)
        # Determine the kurtosis "K" value (Pearson definition: normal = 3)
        k = kurtosis(ma_output, fisher=False)
        if ma not in kurtosis_results:
            kurtosis_results[ma] = []
        kurtosis_results[ma].append(k)

kurtosis_results = pd.DataFrame(kurtosis_results)
kurtosis_results.to_csv('kurtosis_results.csv')
In [82]:
kurtosis_results.head(5)
Out[82]:
period SMA EMA WMA DEMA KAMA MIDPOINT T3 TEMA TRIMA
0 4 NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 5 NaN NaN NaN NaN NaN NaN NaN NaN NaN
2 6 NaN NaN NaN NaN NaN NaN NaN NaN NaN
3 7 NaN NaN NaN NaN NaN NaN NaN NaN NaN
4 8 NaN NaN NaN NaN NaN NaN NaN NaN NaN
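Two details of `scipy.stats.kurtosis` explain the table above: `fisher=False` selects the Pearson definition (a normal distribution scores K ≈ 3, the mesokurtic target), and any NaN in the input propagates to the result under the default `nan_policy`, which is why rows computed over unfilled (e.g. weekend) values come out NaN. A small sketch:

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(1)
sample = rng.normal(size=100_000)

# fisher=False -> Pearson kurtosis; a large normal sample lands near 3
k = kurtosis(sample, fisher=False)

# Default nan_policy='propagate': one NaN in the input makes the result NaN
k_nan = kurtosis(np.array([1.0, 2.0, np.nan]), fisher=False)
```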

Optimized Periods

In [83]:
# Determine the period with K closest to 3 +/-5%
optimized_period = {}
# https://pypi.org/project/TA-Lib/ determines the type of moving average to use
# https://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.at.html#pandas.DataFrame.at
for ma in talib_moving_averages:
    difference = np.abs(kurtosis_results[ma] - 3)
    df_arimahyb = pd.DataFrame({'difference': difference, 'period': kurtosis_results['period']})
    df_arimahyb = df_arimahyb.sort_values(by=['difference'], ascending=True).reset_index(drop=True)
    if df_arimahyb.at[0, 'difference'] < 3 * 0.05:
        optimized_period[ma] = df_arimahyb.at[0, 'period']
    else:
        print(ma + ' is not viable, best K greater or less than 3 +/-5%')

print('\nOptimized periods:', optimized_period)
SMA is not viable, best K greater or less than 3 +/-5%
EMA is not viable, best K greater or less than 3 +/-5%
WMA is not viable, best K greater or less than 3 +/-5%
DEMA is not viable, best K greater or less than 3 +/-5%
KAMA is not viable, best K greater or less than 3 +/-5%
MIDPOINT is not viable, best K greater or less than 3 +/-5%
T3 is not viable, best K greater or less than 3 +/-5%
TEMA is not viable, best K greater or less than 3 +/-5%
TRIMA is not viable, best K greater or less than 3 +/-5%

Optimized periods: {}
In [84]:
optimized_period
Out[84]:
{}
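The selection rule above can be isolated on toy numbers (hypothetical K values): the empty result in this run simply means no period's K fell inside the 3 ±5% band, i.e. `|K - 3| < 0.15`.

```python
import numpy as np
import pandas as pd

# Toy kurtosis table: period vs K for one hypothetical moving average
toy = pd.DataFrame({'period': [4, 5, 6, 7], 'SMA': [1.2, 2.9, 3.4, 5.0]})

# Rank periods by distance from the mesokurtic target K = 3
difference = np.abs(toy['SMA'] - 3)
best = (pd.DataFrame({'difference': difference, 'period': toy['period']})
          .sort_values('difference').reset_index(drop=True))

# Accept the closest period only if it lies within 3 +/- 5%
optimized = {'SMA': best.at[0, 'period']} if best.at[0, 'difference'] < 3 * 0.05 else {}
```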

Simulation Keys

In [ ]:
simulation = {}
for ma in optimized_period:
    print(ma)
    print(functions[ma])
    print(int(optimized_period[ma]))
    # Low-volatility component: the moving average at the optimized period
    low_vol = df.apply(lambda c: functions[ma](c, timeperiod=int(optimized_period[ma])))
    low_vol = low_vol.fillna(0)
    # High-volatility component: the residual left after subtracting the MA
    high_vol = pd.DataFrame()
    df2 = df.copy()
    for i in df2.columns:
        if i in low_vol.columns:
            high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49
DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
19
TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9
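The loop above splits each column into a smooth low-volatility part (the moving average, for ARIMA) and a high-volatility residual (for the LSTM). By construction the two components sum back to the original series. A minimal pandas sketch of the same idea, using a rolling mean in place of TA-Lib:

```python
import numpy as np
import pandas as pd

# Toy close series; a 3-period rolling mean stands in for the TA-Lib MA
close = pd.Series([10.0, 11.0, 12.0, 11.5, 12.5, 13.0])
low_vol = close.rolling(3).mean().fillna(0)

# The high-volatility residual is whatever the MA did not capture,
# so low_vol + high_vol reconstructs the original series exactly
high_vol = close.subtract(low_vol, fill_value=0)
recombined = low_vol + high_vol
```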
In [ ]:
low_vol.tail(20)
Out[ ]:
open high low close Adj Close volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp search
1060 140.200839 141.942909 138.524500 140.171495 139.966842 8.852448e+07 142.165478 146.699207 1.815578 4.572948 155.845103 137.553312 140.365562 4.935800 105.739092 -0.047411 125.318767 -0.018291 140.471430 -0.008749 19.573385
1061 139.425914 141.705469 138.035200 140.698014 140.492650 8.620711e+07 141.528981 145.978836 2.115887 4.189393 154.357621 137.600050 140.587196 4.939545 105.263514 -0.048037 124.464999 -0.019222 139.335869 -0.009472 19.632022
1062 140.773058 142.636405 139.932338 141.733666 141.526843 7.421445e+07 141.294887 145.298477 2.211018 3.647690 152.593858 138.003097 141.351509 4.946870 104.786174 -0.048658 123.598217 -0.020150 138.164839 -0.010188 19.698090
1063 142.179695 143.266994 141.127848 142.249061 142.041527 6.519616e+07 141.224295 144.665584 2.093072 3.241276 151.148137 138.183031 141.949877 4.950518 104.307114 -0.049275 122.718682 -0.021074 136.959041 -0.010898 19.763505
1064 142.253947 144.008334 141.546689 142.555532 142.347589 6.254214e+07 141.336839 144.184381 1.988881 2.884864 149.954110 138.414652 142.353647 4.952685 103.826381 -0.049886 121.826667 -0.021994 135.719217 -0.011600 20.311676
1065 142.782738 143.732491 141.438660 142.125353 141.918068 6.542511e+07 141.385297 143.758659 1.774804 2.626682 149.012024 138.505294 142.201451 4.949632 103.344020 -0.050491 120.922446 -0.022909 134.446150 -0.012293 20.671514
1066 142.153085 142.656915 140.466684 141.564232 141.357788 7.040262e+07 141.585336 143.387397 1.634667 2.376817 148.141030 138.633764 141.776638 4.945637 102.860075 -0.051092 120.006305 -0.023818 133.140665 -0.012977 20.900131
1067 142.177201 143.194327 140.977156 142.610382 142.402435 6.948112e+07 141.933749 143.094536 1.573317 2.074153 147.242842 138.946230 142.332468 4.953023 102.374593 -0.051687 119.078535 -0.024722 131.803627 -0.013650 21.038585
1068 143.009006 144.052615 142.286776 143.812497 143.602819 6.805244e+07 142.378675 142.879716 1.473333 1.874158 146.628032 139.131400 143.319154 4.961467 101.887619 -0.052275 118.139433 -0.025618 130.435938 -0.014311 21.116168
1069 143.380322 145.547752 142.940349 145.397429 145.185452 7.592729e+07 142.902069 142.813890 1.447641 1.844159 146.502207 139.125573 144.704671 4.972505 101.399198 -0.052858 117.189304 -0.026508 129.038540 -0.014959 21.153587
1070 145.337970 147.615882 144.980528 147.444584 147.229635 7.653090e+07 143.644287 142.961273 1.284466 2.010227 146.981728 138.940819 146.531280 4.986604 100.909377 -0.053435 116.228458 -0.027389 127.612408 -0.015592 21.165321
1071 147.375283 149.163050 146.995423 148.921380 148.704294 6.811986e+07 144.553694 143.236380 0.961952 2.270386 147.777152 138.695607 148.124680 4.996737 100.418203 -0.054006 115.257214 -0.028261 126.158555 -0.016211 21.161363
1072 148.656821 150.010875 148.071943 149.870634 149.652170 6.425222e+07 145.660163 143.530869 0.589081 2.556352 148.643574 138.418164 149.288649 5.003230 99.925720 -0.054570 114.275894 -0.029124 124.678027 -0.016812 21.148490
1073 149.806550 150.715254 149.026204 149.977942 149.759331 6.069918e+07 146.862121 143.785380 0.135134 2.805932 149.397244 138.173516 149.748178 5.003989 99.431976 -0.055128 113.284828 -0.029977 123.171903 -0.017396 21.131204
1074 149.937482 150.666013 149.022091 149.911667 149.693162 5.465321e+07 147.905162 144.001463 -0.245163 3.045742 150.092948 137.909978 149.857170 5.003545 98.937018 -0.055679 112.284350 -0.030820 121.641290 -0.017961 25.016406
1075 150.228161 151.254072 149.586503 150.104281 149.885502 5.602702e+07 148.803988 144.237215 -0.571069 3.270011 150.777237 137.697192 150.021910 5.004835 98.440892 -0.056223 111.274800 -0.031650 120.087330 -0.018506 27.455491
1076 150.328251 150.997797 149.591175 149.912656 149.694163 5.484778e+07 149.449021 144.548659 -0.850904 3.458615 151.465890 137.631428 149.949074 5.003520 97.943645 -0.056759 110.256524 -0.032469 118.511190 -0.019029 28.912854
1077 150.525566 152.430694 150.099878 151.531571 151.310718 7.580033e+07 150.032876 144.967153 -0.975625 3.719924 152.407001 137.527305 151.004072 5.014296 97.445324 -0.057289 109.229873 -0.033274 116.914063 -0.019528 29.716707
1078 149.301052 151.688142 148.723104 151.137179 150.916905 1.012990e+08 150.349418 145.413317 -0.891585 3.905336 153.223988 137.602646 151.092810 5.011652 96.945977 -0.057811 108.195203 -0.034066 115.297171 -0.020004 30.096629
1079 149.321425 151.018197 148.455004 150.396057 150.176865 9.262134e+07 150.424479 145.823313 -0.852689 3.878291 153.579894 138.066731 150.628308 5.006660 96.445650 -0.058325 107.152874 -0.034844 113.661756 -0.020453 27.283213
In [ ]:
high_vol.head(10)
Out[ ]:
open high low close Adj Close volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp search
0 36.220001 36.325001 35.775002 35.875000 34.054882 57111200.0 36.173571 36.751904 0.303356 0.960520 38.672945 34.830864 35.924548 3.551770 38.458011 0.046984 29.704545 0.102857 43.304973 -0.053955 15.0
1 35.922501 36.197498 35.680000 36.022499 34.194897 86278400.0 36.095357 36.634762 0.328795 0.852735 38.340231 34.929292 35.989849 3.555991 38.240991 0.049445 29.954520 0.099254 43.438321 -0.053936 15.0
2 35.755001 35.875000 35.602501 35.682499 33.872143 96515200.0 35.984999 36.495238 0.346702 0.677629 37.850495 35.139980 35.784949 3.546235 38.027974 0.051918 30.209839 0.095602 43.557403 -0.053820 15.0
3 35.724998 36.187500 35.724998 36.044998 34.216255 76806800.0 36.001071 36.362023 0.387422 0.387634 37.137291 35.586756 35.958315 3.556633 37.818962 0.054401 30.470232 0.091907 43.662260 -0.053608 15.0
4 36.027500 36.487499 35.842499 36.264999 34.425095 84362400.0 35.973571 36.243809 0.388315 0.308042 36.859893 35.627725 36.162771 3.562891 37.613953 0.056893 30.735430 0.088177 43.752965 -0.053302 14.0
5 36.182499 36.462502 36.095001 36.382500 34.536625 79127200.0 36.039642 36.202738 0.372153 0.308860 36.820458 35.585018 36.309257 3.566217 37.412947 0.059392 31.005161 0.084416 43.829622 -0.052901 14.0
6 36.467499 36.544998 36.205002 36.435001 34.586472 99538000.0 36.101071 36.206547 0.317572 0.295861 36.798268 35.614826 36.393086 3.567700 37.215939 0.061899 31.279154 0.080632 43.892360 -0.052406 14.0
7 36.375000 37.122501 36.360001 36.942501 35.068211 100797600.0 36.253571 36.220595 0.322643 0.340687 36.901969 35.539221 36.759363 3.581920 37.022928 0.064410 31.557136 0.076830 43.941338 -0.051818 14.0
8 36.992500 37.332500 36.832500 37.259998 35.369610 80528400.0 36.430357 36.266785 0.257925 0.410484 37.087753 35.445818 37.093120 3.590715 36.833908 0.066926 31.838833 0.073014 43.976744 -0.051137 14.0
9 37.205002 37.724998 37.142502 37.389999 35.493000 95174000.0 36.674285 36.329523 0.184267 0.445597 37.220717 35.438330 37.291039 3.594294 36.648875 0.069445 32.123972 0.069192 43.998789 -0.050365 16.0

Common Functions

In [77]:
def get_arima(dataframe, original_data, train_len, test_len):
    # Prepare train and test data
    X_value = pd.DataFrame(dataframe.iloc[:, :])
    y_value = pd.DataFrame(dataframe.iloc[:, 3])
    X_train, X_test = split_train_test(X_value)
    y_train, y_test = split_train_test(y_value)
    yc_train, yc_test = split_train_test(original_data)
    yc = yc_test.values.tolist()
    y_train_list = y_train['close'].values.tolist()
    y_test_list = y_test['close'].values.tolist()

    # Determine model order via stepwise search
    model = auto_arima(y_train_list, trace=True, error_action='ignore',
                       start_p=1, start_q=1, max_p=3, max_q=3, d=3,
                       suppress_warnings=True, stepwise=True, seasonal=True)
    print(model.summary())
    model.fit(y_train_list, disp=0)
    order = model.get_params()['order']
    print('ARIMA order:', order, '\n')

    # Generate walk-forward predictions: refit on an expanding window,
    # forecast one step ahead, then append the observed value
    prediction = []
    for i in range(len(y_test_list)):
        model = pmdarima.ARIMA(order=order)
        model.fit(y_train_list, disp=0)
        prediction.append(model.predict()[0])
        y_train_list.append(y_test_list[i])

    # Generate error metrics
    mse = mean_squared_error(yc_test, prediction)
    rmse = mse ** 0.5
    mae = mean_absolute_error(pd.Series(yc_test).values.tolist(), pd.Series(prediction).values.tolist())
    return yc, prediction, mse, rmse, mae
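`get_arima` depends on a `split_train_test` helper defined earlier in the notebook. A minimal sketch of its assumed behaviour (a chronological split; the notebook presumably uses the global `train_len`, passed explicitly here to keep the sketch self-contained):

```python
import pandas as pd

def split_train_test_sketch(data, train_len):
    """Chronological split: first train_len rows for training, the rest for
    testing. Sketch of the notebook's split_train_test helper (assumed)."""
    train = data.iloc[:train_len]
    test = data.iloc[train_len:]
    return train, test

s = pd.DataFrame({'close': range(10)})
tr, te = split_train_test_sketch(s, 8)
```

A chronological (rather than shuffled) split matters here: the walk-forward ARIMA loop appends each observed test value to the training history, so the test rows must strictly follow the training rows in time.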
In [95]:
def plot_train(simulation, SIM):
  train_predict_index = np.load("index_train_appl.npy", allow_pickle=True)  # Dates for train data

  # Dataframe with each column holding one window of predicted daily closing prices
  predict_result = pd.DataFrame()
  for i in range(len(simulation[SIM]['final_tr']['prediction'])):
    y_predict = pd.DataFrame(simulation[SIM]['final_tr']['prediction'][i], columns=["predicted_price"],
                             index=train_predict_index[i:i + output_dim])
    predict_result = pd.concat([predict_result, y_predict], axis=1, sort=False)

  # Dataframe with each column holding one window of real daily closing prices
  real_price = pd.DataFrame()
  for i in range(len(simulation[SIM]['final_tr']['original'])):
    y_train = pd.DataFrame(simulation[SIM]['final_tr']['original'][i], columns=["real_price"],
                           index=train_predict_index[i:i + output_dim])
    real_price = pd.concat([real_price, y_train], axis=1, sort=False)

  predict_result['predicted_mean'] = predict_result.mean(axis=1)  # Daily mean predicted closing price
  real_price['real_mean'] = real_price.mean(axis=1)  # Daily mean real closing price

  # Plot the predicted result
  plt.figure(figsize=(16, 8))
  plt.plot(real_price["real_mean"])
  plt.plot(predict_result["predicted_mean"], color='r')
  plt.xlabel("Date")
  plt.ylabel("Stock price")
  plt.legend(("Real price", "Predicted price"), loc="upper left", fontsize=16)
  plt.title(f"The result of Training for Hybrid Arima LSTM with MA - {SIM} : {fileimg}", fontsize=20)
  plt.show()

  # Calculate error metrics
  predicted = predict_result["predicted_mean"]
  real = real_price["real_mean"]
  RMSE = np.sqrt(mean_squared_error(predicted, real))
  MSE = mean_squared_error(predicted, real)
  MAE = mean_absolute_error(predicted, real)
  print(f"----- Train RMSE for {SIM} -----", RMSE)
  print(f"----- Train MSE for {SIM} -----", MSE)
  print(f"----- Train MAE for {SIM} -----", MAE)
In [96]:
def plot_test(simulation, SIM):
  test_predict_index = np.load("index_test_appl.npy", allow_pickle=True)  # Dates for test data

  # Dataframe with each column holding one window of predicted daily closing prices
  predict_result = pd.DataFrame()
  for i in range(len(simulation[SIM]['final']['prediction'])):
    y_predict = pd.DataFrame(simulation[SIM]['final']['prediction'][i], columns=["predicted_price"],
                             index=test_predict_index[i:i + output_dim])
    predict_result = pd.concat([predict_result, y_predict], axis=1, sort=False)

  # Dataframe with each column holding one window of real daily closing prices
  real_price = pd.DataFrame()
  for i in range(len(simulation[SIM]['final']['original'])):
    y_test = pd.DataFrame(simulation[SIM]['final']['original'][i], columns=["real_price"],
                          index=test_predict_index[i:i + output_dim])
    real_price = pd.concat([real_price, y_test], axis=1, sort=False)

  predict_result['predicted_mean'] = predict_result.mean(axis=1)  # Daily mean predicted closing price
  real_price['real_mean'] = real_price.mean(axis=1)  # Daily mean real closing price

  # Plot the predicted result
  plt.figure(figsize=(16, 8))
  plt.plot(real_price["real_mean"])
  plt.plot(predict_result["predicted_mean"], color='r')
  plt.xlabel("Date")
  plt.ylabel("Stock price")
  plt.legend(("Real price", "Predicted price"), loc="upper left", fontsize=16)
  plt.title(f"The result of Testing for Hybrid Arima LSTM with MA - {SIM} : {fileimg}", fontsize=20)
  plt.show()

  # Calculate error metrics
  predicted = predict_result["predicted_mean"]
  real = real_price["real_mean"]
  RMSE = np.sqrt(mean_squared_error(predicted, real))
  MSE = mean_squared_error(predicted, real)
  MAE = mean_absolute_error(predicted, real)
  print(f"----- Test RMSE for {SIM} -----", RMSE)
  print(f"----- Test MSE for {SIM} -----", MSE)
  print(f"----- Test MAE for {SIM} -----", MAE)
In [10]:
def plot_train_high(simulation, SIM):
  train_predict_index = np.load("index_test_appl.npy", allow_pickle=True)  # Dates for test data

  # Dataframe with each column holding one window of predicted daily closing prices
  predict_result = pd.DataFrame()
  for i in range(len(simulation[SIM]['high_vol']['prediction'])):
    y_predict = pd.DataFrame(simulation[SIM]['high_vol']['prediction'][i], columns=["predicted_price"],
                             index=train_predict_index[i:i + output_dim])
    predict_result = pd.concat([predict_result, y_predict], axis=1, sort=False)

  # Dataframe with each column holding one window of real daily closing prices
  real_price = pd.DataFrame()
  for i in range(len(simulation[SIM]['high_vol']['original'])):
    y_train = pd.DataFrame(simulation[SIM]['high_vol']['original'][i], columns=["real_price"],
                           index=train_predict_index[i:i + output_dim])
    real_price = pd.concat([real_price, y_train], axis=1, sort=False)

  predict_result['predicted_mean'] = predict_result.mean(axis=1)  # Daily mean predicted closing price
  real_price['real_mean'] = real_price.mean(axis=1)  # Daily mean real closing price

  # Plot the predicted result
  plt.figure(figsize=(16, 8))
  plt.plot(real_price["real_mean"])
  plt.plot(predict_result["predicted_mean"], color='r')
  plt.xlabel("Date")
  plt.ylabel("Stock price")
  plt.legend(("Real price", "Predicted price"), loc="upper left", fontsize=16)
  plt.title(f"The result of Training for Hybrid Arima LSTM with MA {SIM}", fontsize=20)
  plt.show()

  # Calculate error metrics for the high-volatility (LSTM) component
  predicted = predict_result["predicted_mean"]
  real = real_price["real_mean"]
  RMSE = np.sqrt(mean_squared_error(predicted, real))
  MSE = mean_squared_error(predicted, real)
  MAE = mean_absolute_error(predicted, real)
  print(f"----- Individual LSTM RMSE for {SIM} -----", RMSE)
  print(f"----- Individual LSTM MSE for {SIM} -----", MSE)
  print(f"----- Individual LSTM MAE for {SIM} -----", MAE)
In [81]:
def plot_train_low(simulation, SIM):
  train_predict_index = np.load("index_test_appl.npy", allow_pickle=True)  # Dates for test data

  # Dataframe with each column holding one window of predicted daily closing prices
  predict_result = pd.DataFrame()
  for i in range(len(simulation[SIM]['low_vol']['prediction'])):
    y_predict = pd.DataFrame(simulation[SIM]['low_vol']['prediction'][i], columns=["predicted_price"],
                             index=train_predict_index[i:i + output_dim])
    predict_result = pd.concat([predict_result, y_predict], axis=1, sort=False)

  # Dataframe with each column holding one window of real daily closing prices
  real_price = pd.DataFrame()
  for i in range(len(simulation[SIM]['low_vol']['original'])):
    y_train = pd.DataFrame(simulation[SIM]['low_vol']['original'][i], columns=["real_price"],
                           index=train_predict_index[i:i + output_dim])
    real_price = pd.concat([real_price, y_train], axis=1, sort=False)

  predict_result['predicted_mean'] = predict_result.mean(axis=1)  # Daily mean predicted closing price
  real_price['real_mean'] = real_price.mean(axis=1)  # Daily mean real closing price

  # Plot the predicted result
  plt.figure(figsize=(16, 8))
  plt.plot(real_price["real_mean"])
  plt.plot(predict_result["predicted_mean"], color='r')
  plt.xlabel("Date")
  plt.ylabel("Stock price")
  plt.legend(("Real price", "Predicted price"), loc="upper left", fontsize=16)
  plt.title(f"The result of Training for {SIM}", fontsize=20)
  plt.show()

  # Calculate error metrics for the low-volatility (ARIMA) component
  predicted = predict_result["predicted_mean"]
  real = real_price["real_mean"]
  RMSE = np.sqrt(mean_squared_error(predicted, real))
  MSE = mean_squared_error(predicted, real)
  MAE = mean_absolute_error(predicted, real)
  print(f"----- Arima RMSE for {SIM} -----", RMSE)
  print(f"----- Arima MSE for {SIM} -----", MSE)
  print(f"----- Arima MAE for {SIM} -----", MAE)
In [82]:
import os
os.getcwd()
Out[82]:
'/content/drive/.shortcut-targets-by-id/1IaGjVBlTspxI2CHSrxfYnaiYvsaG0pHs/Stock price prediction/Archana - LSTM Hybrid/Outputs/sentiment'

Univariate ARIMA Multistep Multivariate LSTM Hybrid Model Experiment 1

In [83]:
def get_lstm(data, original_data, train_len, test_len, img_file, ma, lstm_len=3):
    # Prepare and scale train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Window the scaled data: X has shape (samples, n_steps_in, features),
    # e.g. 224 x 3 x 21 (each 3 x 21 array is 3 days' worth of data);
    # yc holds the corresponding closing price values
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)
    X_train, X_test = split_train_test(X)
    y_train, y_test = split_train_test(y)
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20
    input_dim = X_train.shape[1]     # timesteps per sample
    feature_size = X_train.shape[2]  # features per timestep
    output_dim = y_train.shape[1]    # forecast horizon



    # Option 1
    # Set up & fit LSTM RNN
    model = Sequential()
    model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    model.add(Dense(units=64,activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(units=output_dim))
    model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')

    ## Common code
    callbacks = [
    EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file+'.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # plot loss
    fname2 = img_file+'-'+ma
    plt.title(img_file+'-'+ma+' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2+'.png',dpi='figure')
    pyplot.show()


    # # option 2
    # model = Sequential()
    # model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
    # model.add(Dense(64))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(lr = 0.001), loss='mean_squared_error', metrics=['accuracy'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 3
    # define custom activation
    # cts().update({'double_tanh':Double_Tanh(double_tanh)})
    #     # Model Generation
    # model = Sequential()
    # #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    # model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
    # model.add(Dense(1))
    # model.add(Activation(double_tanh))
    # model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 4
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(x_train.shape[1], 1)))
    # model.add(LSTM(units=int(lstm_len/2)))
    # model.add(Dense(1, activation='sigmoid'))
    # model.compile(loss='mean_squared_error', optimizer='adam')
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()



    # Generate train predictions and rescale back to price space
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
    outputtr = []
    for i in range(len(predictiontr)):
        outputtr.extend(predictiontr[i])
    predictiontr = outputtr
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()

    # Train error metrics (both series in price space)
    mse_tr = mean_squared_error(Original_tr, predictiontr)
    rmse_tr = mse_tr ** 0.5
    mae_tr = mean_absolute_error(Original_tr, pd.Series(predictiontr))

    # Generate test predictions (det is a fixed manual offset applied to the forecasts)
    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte) - det).tolist()
    outputte = []
    for i in range(len(predictionte)):
        outputte.extend(predictionte[i])
    predictionte = outputte
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()

    # Test error metrics
    mse_te = mean_squared_error(Original_te, predictionte)
    rmse_te = mse_te ** 0.5
    mae_te = mean_absolute_error(Original_te, pd.Series(predictionte))

    return Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr, Original_te, predictionte, mse_te, rmse_te, mae_te
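`get_lstm` relies on a `get_X_y` windowing helper defined elsewhere in the notebook (there it also returns a third value, `yc`, omitted here). A minimal sketch of the assumed sliding-window behaviour, with hypothetical toy inputs:

```python
import numpy as np

def get_X_y_sketch(X_scaled, y_scaled, n_steps_in=3, n_steps_out=1):
    """Sliding-window sketch of the notebook's get_X_y helper (assumed):
    each sample pairs n_steps_in consecutive rows of features with the
    following n_steps_out closing prices."""
    X, y = [], []
    for i in range(len(X_scaled) - n_steps_in - n_steps_out + 1):
        X.append(X_scaled[i:i + n_steps_in])
        y.append(y_scaled[i + n_steps_in:i + n_steps_in + n_steps_out, 0])
    return np.array(X), np.array(y)

feats = np.arange(40, dtype=float).reshape(10, 4)  # 10 days x 4 features (toy data)
target = feats[:, [3]]                             # pretend column 3 is 'close'
X, y = get_X_y_sketch(feats, target)               # X: (7, 3, 4), y: (7, 1)
```

This shape convention matches the LSTM's `input_shape=(input_dim, feature_size)`: `input_dim` is the window length and `feature_size` the per-day feature count.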
In [84]:
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation1 = {}
    imgfile = 'Experiment1'
    for ma in optimized_period:
              print(ma)
              print(functions[ma])
              print(int(optimized_period[ma]))
              # Low-volatility component: the moving average of each column
              low_vol = df.apply(lambda c: functions[ma](c, timeperiod=int(optimized_period[ma])))
              low_vol = low_vol.fillna(0)
              low_vol_data = df['close']
              high_vol = pd.DataFrame()
              df2 = df.copy()
              for i in df2.columns:
                if i in low_vol.columns:
                  high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
              high_vol_data = df['close']
              ## *****************************************************
              # Generate ARIMA and LSTM predictions
              print('\nWorking on ' + ma + ' predictions')
              try:
                print('parameters used : ', train_len, test_len)
                low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse, low_vol_mae = get_arima(low_vol, low_vol_data, train_len, test_len)
              except Exception:
                print('ARIMA error, skipping to next MA type')
                continue
              Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr, high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse, high_vol_mae = get_lstm(high_vol, high_vol_data, train_len, test_len, imgfile, ma)
              final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps 
              mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
              rmse_ftr = mse_ftr ** 0.5
              mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
              mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)

              final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
              mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
              rmse = mse ** 0.5
              mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
              mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
              # Generate prediction accuracy
              actual = df['close'].tail(test_len).values
              result_1 = []
              result_2 = []
              for i in range(1, len(final_prediction)):
                  # Compare prediction to previous close price
                  if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                      result_1.append(1)
                  elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                      result_1.append(1)
                  else:
                      result_1.append(0)

                  # Compare prediction to previous prediction
                  if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                      result_2.append(1)
                  elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                      result_2.append(1)
                  else:
                      result_2.append(0)

              accuracy_1 = np.mean(result_1)
              accuracy_2 = np.mean(result_2)

              simulation1[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
                                            'rmse': low_vol_rmse, 'mae' : low_vol_mae},
                                'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
                                            'rmse': high_vol_rmse, 'mae' : high_vol_mae},
                                'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
                                            'rmse': rmse_ftr, 'mae' : mae_ftr},
                                'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
                                          'rmse': rmse, 'mae': mae },
                                'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}

              # save simulation data here as checkpoint
              with open('simulation1_data.json', 'w') as fp:
                  json.dump(simulation1, fp)

              # Report results so far (separate loop variable to avoid shadowing `ma`)
              for key in simulation1.keys():
                  print('\n' + key)
                  print('Prediction vs Close:\t\t' + str(round(100*simulation1[key]['accuracy']['prediction vs close'], 2))
                        + '% Accuracy')
                  print('Prediction vs Prediction:\t' + str(round(100*simulation1[key]['accuracy']['prediction vs prediction'], 2))
                        + '% Accuracy')
                  print('MSE:\t', simulation1[key]['final']['mse'],
                        '\nRMSE:\t', simulation1[key]['final']['rmse'],
                        '\nMAE:\t', simulation1[key]['final']['mae'])
    elapsed = timeit.default_timer() - start_time
    print('Runtime: mins:',elapsed/60)
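The directional-accuracy loop above (the `result_2`, prediction-vs-prediction variant) can be expressed more compactly with NumPy sign comparisons. A standalone sketch with toy series in place of the notebook's `final_prediction` and `actual`; note that, unlike the loop, ties (two flat moves) count as agreement here:

```python
import numpy as np

def directional_accuracy(prediction, actual):
    """Fraction of days where the predicted move direction (vs the previous
    prediction) matches the actual move direction (vs the previous close)."""
    pred_dir = np.sign(np.diff(prediction))
    actual_dir = np.sign(np.diff(actual))
    return np.mean(pred_dir == actual_dir)

pred = np.array([10.0, 11.0, 10.5, 10.8])  # toy predictions
act = np.array([10.0, 10.9, 11.0, 10.8])   # toy actual closes
acc = directional_accuracy(pred, act)      # 1 of 3 moves agree -> 1/3
```

Directional accuracy is a useful complement to RMSE/MAE here: a hybrid model can have modest price error yet still call the direction of the next move well, which is what the trading-oriented evaluation cares about.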
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17

Working on SMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.54 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4157.020, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3687.148, Time=0.05 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.20 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3458.651, Time=0.08 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3322.133, Time=0.11 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=0.80 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.88 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3324.133, Time=0.22 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 2.922 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1657.067
Date:                Sun, 12 Dec 2021   AIC                           3322.133
Time:                        13:22:03   BIC                           3340.897
Sample:                             0   HQIC                          3329.339
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1966      0.003   -387.226      0.000      -1.203      -1.191
ar.L2         -0.8952      0.006   -138.692      0.000      -0.908      -0.883
ar.L3         -0.3968      0.006    -68.284      0.000      -0.408      -0.385
sigma2         3.5858      0.017    214.535      0.000       3.553       3.619
===================================================================================
Ljung-Box (L1) (Q):                  14.47   Jarque-Bera (JB):           2428881.42
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       271.99
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

WARNING:tensorflow:Layer lstm will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.16778, saving model to LSTM1.h5
48/48 - 3s - loss: 0.1712 - val_loss: 0.1678 - lr: 0.0010 - 3s/epoch - 62ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.16778
48/48 - 0s - loss: 0.1883 - val_loss: 0.3040 - lr: 0.0010 - 419ms/epoch - 9ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.16778
48/48 - 1s - loss: 0.0672 - val_loss: 0.5210 - lr: 0.0010 - 519ms/epoch - 11ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.16778
48/48 - 0s - loss: 0.0577 - val_loss: 0.2578 - lr: 0.0010 - 407ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.16778 to 0.16001, saving model to LSTM1.h5
48/48 - 0s - loss: 0.0489 - val_loss: 0.1600 - lr: 0.0010 - 456ms/epoch - 9ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.16001 to 0.07747, saving model to LSTM1.h5
48/48 - 0s - loss: 0.0417 - val_loss: 0.0775 - lr: 0.0010 - 478ms/epoch - 10ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.07747 to 0.03761, saving model to LSTM1.h5
48/48 - 0s - loss: 0.0437 - val_loss: 0.0376 - lr: 0.0010 - 452ms/epoch - 9ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.03761 to 0.01788, saving model to LSTM1.h5
48/48 - 0s - loss: 0.0414 - val_loss: 0.0179 - lr: 0.0010 - 445ms/epoch - 9ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.01788
48/48 - 0s - loss: 0.0381 - val_loss: 0.0802 - lr: 0.0010 - 469ms/epoch - 10ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.01788 to 0.00515, saving model to LSTM1.h5
48/48 - 0s - loss: 0.0452 - val_loss: 0.0051 - lr: 0.0010 - 380ms/epoch - 8ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00515
48/48 - 0s - loss: 0.0571 - val_loss: 0.0147 - lr: 0.0010 - 430ms/epoch - 9ms/step
Epoch 12/500

Epoch 00012: val_loss improved from 0.00515 to 0.00495, saving model to LSTM1.h5
48/48 - 0s - loss: 0.0581 - val_loss: 0.0050 - lr: 0.0010 - 433ms/epoch - 9ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0547 - val_loss: 0.0245 - lr: 0.0010 - 368ms/epoch - 8ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0348 - val_loss: 0.0158 - lr: 0.0010 - 404ms/epoch - 8ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0303 - val_loss: 0.3462 - lr: 0.0010 - 438ms/epoch - 9ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00495
48/48 - 1s - loss: 0.0273 - val_loss: 0.1067 - lr: 0.0010 - 504ms/epoch - 11ms/step
Epoch 17/500

Epoch 00017: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00017: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0265 - val_loss: 0.0396 - lr: 0.0010 - 439ms/epoch - 9ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0289 - val_loss: 0.0407 - lr: 1.0000e-04 - 402ms/epoch - 8ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0249 - val_loss: 0.0413 - lr: 1.0000e-04 - 405ms/epoch - 8ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0251 - val_loss: 0.0442 - lr: 1.0000e-04 - 396ms/epoch - 8ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0251 - val_loss: 0.0440 - lr: 1.0000e-04 - 434ms/epoch - 9ms/step
Epoch 22/500

Epoch 00022: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00022: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0253 - val_loss: 0.0419 - lr: 1.0000e-04 - 358ms/epoch - 7ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0254 - val_loss: 0.0413 - lr: 1.0000e-05 - 408ms/epoch - 9ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0232 - val_loss: 0.0415 - lr: 1.0000e-05 - 492ms/epoch - 10ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0216 - val_loss: 0.0410 - lr: 1.0000e-05 - 444ms/epoch - 9ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0225 - val_loss: 0.0413 - lr: 1.0000e-05 - 442ms/epoch - 9ms/step
Epoch 27/500

Epoch 00027: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00027: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0239 - val_loss: 0.0411 - lr: 1.0000e-05 - 470ms/epoch - 10ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0229 - val_loss: 0.0413 - lr: 1.0000e-05 - 449ms/epoch - 9ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0239 - val_loss: 0.0417 - lr: 1.0000e-05 - 386ms/epoch - 8ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0218 - val_loss: 0.0418 - lr: 1.0000e-05 - 448ms/epoch - 9ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0213 - val_loss: 0.0420 - lr: 1.0000e-05 - 429ms/epoch - 9ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0228 - val_loss: 0.0420 - lr: 1.0000e-05 - 482ms/epoch - 10ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0234 - val_loss: 0.0415 - lr: 1.0000e-05 - 463ms/epoch - 10ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0225 - val_loss: 0.0423 - lr: 1.0000e-05 - 423ms/epoch - 9ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0239 - val_loss: 0.0423 - lr: 1.0000e-05 - 437ms/epoch - 9ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0226 - val_loss: 0.0420 - lr: 1.0000e-05 - 441ms/epoch - 9ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0222 - val_loss: 0.0413 - lr: 1.0000e-05 - 410ms/epoch - 9ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0233 - val_loss: 0.0408 - lr: 1.0000e-05 - 421ms/epoch - 9ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0231 - val_loss: 0.0411 - lr: 1.0000e-05 - 448ms/epoch - 9ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0220 - val_loss: 0.0406 - lr: 1.0000e-05 - 418ms/epoch - 9ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0221 - val_loss: 0.0400 - lr: 1.0000e-05 - 402ms/epoch - 8ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0266 - val_loss: 0.0399 - lr: 1.0000e-05 - 421ms/epoch - 9ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0227 - val_loss: 0.0402 - lr: 1.0000e-05 - 448ms/epoch - 9ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0222 - val_loss: 0.0398 - lr: 1.0000e-05 - 452ms/epoch - 9ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0200 - val_loss: 0.0395 - lr: 1.0000e-05 - 431ms/epoch - 9ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0247 - val_loss: 0.0391 - lr: 1.0000e-05 - 424ms/epoch - 9ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0231 - val_loss: 0.0380 - lr: 1.0000e-05 - 473ms/epoch - 10ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0209 - val_loss: 0.0377 - lr: 1.0000e-05 - 465ms/epoch - 10ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0230 - val_loss: 0.0366 - lr: 1.0000e-05 - 432ms/epoch - 9ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0243 - val_loss: 0.0358 - lr: 1.0000e-05 - 427ms/epoch - 9ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0213 - val_loss: 0.0364 - lr: 1.0000e-05 - 419ms/epoch - 9ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0240 - val_loss: 0.0372 - lr: 1.0000e-05 - 383ms/epoch - 8ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0247 - val_loss: 0.0364 - lr: 1.0000e-05 - 374ms/epoch - 8ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0224 - val_loss: 0.0363 - lr: 1.0000e-05 - 401ms/epoch - 8ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0240 - val_loss: 0.0361 - lr: 1.0000e-05 - 431ms/epoch - 9ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0228 - val_loss: 0.0369 - lr: 1.0000e-05 - 410ms/epoch - 9ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0221 - val_loss: 0.0372 - lr: 1.0000e-05 - 399ms/epoch - 8ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0231 - val_loss: 0.0376 - lr: 1.0000e-05 - 407ms/epoch - 8ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0218 - val_loss: 0.0367 - lr: 1.0000e-05 - 468ms/epoch - 10ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0215 - val_loss: 0.0360 - lr: 1.0000e-05 - 389ms/epoch - 8ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0198 - val_loss: 0.0352 - lr: 1.0000e-05 - 461ms/epoch - 10ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.00495
48/48 - 0s - loss: 0.0232 - val_loss: 0.0353 - lr: 1.0000e-05 - 457ms/epoch - 10ms/step
Epoch 00062: early stopping
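The "saving model", "ReduceLROnPlateau reducing learning rate", and "early stopping" messages above come from Keras callbacks. The notebook's exact callback code isn't shown, but the log is consistent with an LR patience of 5 and a stopping patience of 50 (best epoch 12, stop at epoch 62). A plain-Python mimic of that patience logic, with illustrative values:

```python
# Plain-Python mimic of the ReduceLROnPlateau / EarlyStopping behaviour
# seen in the log above. factor / patience / min_lr are assumptions
# inferred from the log, not the notebook's actual callback arguments.

def run_patience_schedule(val_losses, lr=1e-3, factor=0.1,
                          lr_patience=5, stop_patience=50, min_lr=1e-5):
    """Replay a val_loss history; return (final_lr, stopping_epoch)."""
    best = float("inf")
    since_best = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:                        # "val_loss improved" branch
            best = loss
            since_best = 0
        else:                                  # "did not improve" branch
            since_best += 1
            if since_best % lr_patience == 0:  # ReduceLROnPlateau step
                lr = max(lr * factor, min_lr)
            if since_best >= stop_patience:    # EarlyStopping trigger
                return lr, epoch
    return lr, len(val_losses)

# Two improving epochs followed by a long plateau: the LR decays
# 1e-3 -> 1e-4 -> 1e-5 (floor), then training halts 50 epochs after the best.
lr, stop = run_patience_schedule([0.3, 0.2] + [0.5] * 66)
```

In Keras itself the same behaviour would come from `ModelCheckpoint(save_best_only=True)`, `ReduceLROnPlateau`, and `EarlyStopping` passed in the `callbacks=` list of `model.fit`.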
SMA
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 39.29635655476263 
RMSE:	 6.268680607174258 
MAPE:	 5.07416987111494
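The metric helper that produced the block above isn't shown in the output; a minimal sketch of how MSE, RMSE, MAPE, and a directional ("Prediction vs Close") accuracy are typically computed — function names here are illustrative, not the notebook's:

```python
import numpy as np

def regression_metrics(actual, predicted):
    """MSE, RMSE, and MAPE (in percent) between two price series."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mse = np.mean((actual - predicted) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((actual - predicted) / actual)) * 100
    return mse, rmse, mape

def directional_accuracy(actual, predicted):
    """Percent of days where the predicted move has the sign of the realized move."""
    return np.mean(np.sign(np.diff(predicted)) == np.sign(np.diff(actual))) * 100

mse, rmse, mape = regression_metrics([100, 102, 101], [101, 101, 103])
```

The "Prediction vs Prediction" figure in the log presumably compares consecutive predictions against each other rather than against the close, but that detail isn't visible in the output.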
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51
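The help text above is TA-Lib's EMA docstring. For reference, a pure-Python equivalent: the smoothing factor is k = 2 / (timeperiod + 1), and TA-Lib conventionally seeds the first EMA value with an n-period SMA (its warm-up region is NaN, mirrored here with `None`):

```python
# Pure-Python sketch of TA-Lib's EMA (classic SMA-seeded variant).
def ema(prices, timeperiod=30):
    k = 2.0 / (timeperiod + 1)
    out = [None] * (timeperiod - 1)                  # warm-up, like TA-Lib's NaNs
    value = sum(prices[:timeperiod]) / timeperiod    # SMA seed
    out.append(value)
    for price in prices[timeperiod:]:
        value = price * k + value * (1 - k)          # EMA recursion
        out.append(value)
    return out

vals = ema([1, 2, 3, 4, 5, 6], timeperiod=3)  # seed SMA = 2.0
```

The pandas near-equivalent is `series.ewm(span=timeperiod, adjust=False).mean()`, though pandas seeds with the first value rather than an SMA, so the early values differ slightly.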

Working on EMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.41 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4231.556, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3761.238, Time=0.05 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.27 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3532.227, Time=0.08 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3394.496, Time=0.10 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=0.91 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.65 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3396.496, Time=0.20 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 2.699 seconds
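The stepwise search above (pmdarima's `auto_arima`) simply keeps the candidate order with the lowest AIC, where AIC = 2k − 2 ln L for k estimated parameters and maximized likelihood L; `inf` marks fits that failed to converge. The selection rule, using the candidate AICs from this log:

```python
import math

# Candidate AICs copied from the stepwise search log above.
candidates = {
    (1, 3, 1): math.inf,
    (0, 3, 0): 4231.556,
    (1, 3, 0): 3761.238,
    (0, 3, 1): math.inf,
    (2, 3, 0): 3532.227,
    (3, 3, 0): 3394.496,
    (3, 3, 1): math.inf,
    (2, 3, 1): math.inf,
}
best_order = min(candidates, key=candidates.get)

# Cross-check against the reported log likelihood: the winning model has
# k = 4 estimated parameters (ar.L1..ar.L3 and sigma2), so
# AIC = 2*4 - 2*(-1693.248) = 3394.496, matching the summary table.
aic_check = 2 * 4 - 2 * (-1693.248)
```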
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1693.248
Date:                Sun, 12 Dec 2021   AIC                           3394.496
Time:                        13:23:46   BIC                           3413.260
Sample:                             0   HQIC                          3401.702
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.569      0.000      -1.204      -1.192
ar.L2         -0.8976      0.006   -139.811      0.000      -0.910      -0.885
ar.L3         -0.3984      0.006    -68.662      0.000      -0.410      -0.387
sigma2         3.9230      0.018    215.372      0.000       3.887       3.959
===================================================================================
Ljung-Box (L1) (Q):                  14.54   Jarque-Bera (JB):           2462173.05
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.82
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
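The diagnostics table is worth pausing on: a kurtosis of 273.82 is extremely leptokurtic (far from the mesokurtic value of 3), which ties back to the volatility-balance concern in the introduction. The huge Jarque-Bera statistic follows directly from the formula JB = n/6 · (S² + (K − 3)²/4); recomputing it from the table's rounded skew and kurtosis roughly recovers the reported value (the residual count is assumed equal to the 808 observations, which may be off by the differencing order):

```python
# Jarque-Bera normality statistic from sample skewness S and kurtosis K.
def jarque_bera(n, skew, kurtosis):
    return n / 6.0 * (skew ** 2 + (kurtosis - 3.0) ** 2 / 4.0)

# Values taken from the SARIMAX diagnostics table above; the small gap
# versus the reported 2,462,173.05 comes from S and K being rounded.
jb = jarque_bera(n=808, skew=3.90, kurtosis=273.82)
```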

WARNING:tensorflow:Layer lstm_1 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
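This warning means the LSTM runs on a generic GPU kernel instead of the much faster cuDNN one. Per the TensorFlow `LSTM` documentation, the cuDNN path is used only when the layer keeps a specific set of defaults; a non-default activation (a common keras-tuner choice) is a likely trigger here, though the notebook's layer config isn't shown. A small checker over those documented requirements:

```python
# Defaults tf.keras requires before it will dispatch an LSTM to cuDNN
# (per the TensorFlow LSTM docs); masking must also be absent or
# strictly right-padded.
CUDNN_REQUIREMENTS = {
    "activation": "tanh",
    "recurrent_activation": "sigmoid",
    "recurrent_dropout": 0,
    "unroll": False,
    "use_bias": True,
}

def is_cudnn_eligible(layer_config):
    """True if a layer-config dict keeps every cuDNN-required default."""
    return all(layer_config.get(k, v) == v
               for k, v in CUDNN_REQUIREMENTS.items())

# e.g. LSTM(units, activation="relu") breaks the first requirement
# and would produce exactly the fallback warning seen above.
```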
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.25069, saving model to LSTM1.h5
16/16 - 2s - loss: 0.3653 - val_loss: 0.2507 - lr: 0.0010 - 2s/epoch - 106ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.25069 to 0.02507, saving model to LSTM1.h5
16/16 - 0s - loss: 0.1125 - val_loss: 0.0251 - lr: 0.0010 - 163ms/epoch - 10ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0745 - val_loss: 0.2931 - lr: 0.0010 - 149ms/epoch - 9ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0906 - val_loss: 0.0958 - lr: 0.0010 - 135ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0673 - val_loss: 0.0462 - lr: 0.0010 - 131ms/epoch - 8ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0710 - val_loss: 0.0599 - lr: 0.0010 - 149ms/epoch - 9ms/step
Epoch 7/500

Epoch 00007: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00007: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0626 - val_loss: 0.0678 - lr: 0.0010 - 179ms/epoch - 11ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0406 - val_loss: 0.0662 - lr: 1.0000e-04 - 162ms/epoch - 10ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0345 - val_loss: 0.0645 - lr: 1.0000e-04 - 146ms/epoch - 9ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0382 - val_loss: 0.0630 - lr: 1.0000e-04 - 147ms/epoch - 9ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0375 - val_loss: 0.0592 - lr: 1.0000e-04 - 139ms/epoch - 9ms/step
Epoch 12/500

Epoch 00012: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00012: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0365 - val_loss: 0.0586 - lr: 1.0000e-04 - 153ms/epoch - 10ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0358 - val_loss: 0.0584 - lr: 1.0000e-05 - 148ms/epoch - 9ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0351 - val_loss: 0.0587 - lr: 1.0000e-05 - 157ms/epoch - 10ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0382 - val_loss: 0.0586 - lr: 1.0000e-05 - 146ms/epoch - 9ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0378 - val_loss: 0.0588 - lr: 1.0000e-05 - 166ms/epoch - 10ms/step
Epoch 17/500

Epoch 00017: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00017: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0364 - val_loss: 0.0587 - lr: 1.0000e-05 - 156ms/epoch - 10ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0381 - val_loss: 0.0587 - lr: 1.0000e-05 - 154ms/epoch - 10ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0355 - val_loss: 0.0590 - lr: 1.0000e-05 - 135ms/epoch - 8ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0393 - val_loss: 0.0593 - lr: 1.0000e-05 - 140ms/epoch - 9ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0379 - val_loss: 0.0590 - lr: 1.0000e-05 - 142ms/epoch - 9ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0364 - val_loss: 0.0585 - lr: 1.0000e-05 - 149ms/epoch - 9ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0404 - val_loss: 0.0582 - lr: 1.0000e-05 - 152ms/epoch - 9ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0334 - val_loss: 0.0584 - lr: 1.0000e-05 - 135ms/epoch - 8ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0350 - val_loss: 0.0590 - lr: 1.0000e-05 - 137ms/epoch - 9ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0367 - val_loss: 0.0591 - lr: 1.0000e-05 - 154ms/epoch - 10ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0345 - val_loss: 0.0596 - lr: 1.0000e-05 - 156ms/epoch - 10ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0343 - val_loss: 0.0593 - lr: 1.0000e-05 - 139ms/epoch - 9ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0348 - val_loss: 0.0592 - lr: 1.0000e-05 - 162ms/epoch - 10ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0341 - val_loss: 0.0594 - lr: 1.0000e-05 - 186ms/epoch - 12ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0367 - val_loss: 0.0587 - lr: 1.0000e-05 - 170ms/epoch - 11ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0366 - val_loss: 0.0584 - lr: 1.0000e-05 - 162ms/epoch - 10ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0354 - val_loss: 0.0588 - lr: 1.0000e-05 - 175ms/epoch - 11ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0366 - val_loss: 0.0595 - lr: 1.0000e-05 - 160ms/epoch - 10ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0356 - val_loss: 0.0602 - lr: 1.0000e-05 - 160ms/epoch - 10ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0364 - val_loss: 0.0604 - lr: 1.0000e-05 - 166ms/epoch - 10ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0348 - val_loss: 0.0601 - lr: 1.0000e-05 - 180ms/epoch - 11ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0331 - val_loss: 0.0599 - lr: 1.0000e-05 - 139ms/epoch - 9ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0327 - val_loss: 0.0597 - lr: 1.0000e-05 - 146ms/epoch - 9ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0378 - val_loss: 0.0594 - lr: 1.0000e-05 - 186ms/epoch - 12ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0351 - val_loss: 0.0592 - lr: 1.0000e-05 - 145ms/epoch - 9ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0361 - val_loss: 0.0587 - lr: 1.0000e-05 - 160ms/epoch - 10ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0376 - val_loss: 0.0588 - lr: 1.0000e-05 - 156ms/epoch - 10ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0362 - val_loss: 0.0587 - lr: 1.0000e-05 - 148ms/epoch - 9ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0341 - val_loss: 0.0581 - lr: 1.0000e-05 - 137ms/epoch - 9ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0380 - val_loss: 0.0580 - lr: 1.0000e-05 - 157ms/epoch - 10ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0401 - val_loss: 0.0582 - lr: 1.0000e-05 - 168ms/epoch - 11ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0366 - val_loss: 0.0581 - lr: 1.0000e-05 - 163ms/epoch - 10ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0348 - val_loss: 0.0580 - lr: 1.0000e-05 - 164ms/epoch - 10ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0344 - val_loss: 0.0578 - lr: 1.0000e-05 - 142ms/epoch - 9ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0339 - val_loss: 0.0575 - lr: 1.0000e-05 - 181ms/epoch - 11ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.02507
16/16 - 0s - loss: 0.0369 - val_loss: 0.0578 - lr: 1.0000e-05 - 155ms/epoch - 10ms/step
Epoch 00052: early stopping
SMA
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 39.29635655476263 
RMSE:	 6.268680607174258 
MAPE:	 5.07416987111494

EMA
Prediction vs Close:		56.34% Accuracy
Prediction vs Prediction:	48.88% Accuracy
MSE:	 30.25057164247689 
RMSE:	 5.50005196725239 
MAPE:	 4.444486270049439
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49
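The help text above is TA-Lib's WMA docstring. A WMA weights prices linearly, so with period n the newest price gets weight n and the oldest weight 1; a pure-Python reference sketch (warm-up `None`s mirror TA-Lib's leading NaNs):

```python
# Pure-Python sketch of TA-Lib's linearly weighted moving average.
def wma(prices, timeperiod=30):
    n = timeperiod
    denom = n * (n + 1) / 2.0                 # 1 + 2 + ... + n
    out = [None] * (n - 1)                    # warm-up, like TA-Lib's NaNs
    for i in range(n - 1, len(prices)):
        window = prices[i - n + 1 : i + 1]
        out.append(sum(w * p for w, p in zip(range(1, n + 1), window)) / denom)
    return out

vals = wma([1, 2, 3, 4], timeperiod=3)
```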

Working on WMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.41 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4264.089, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3793.930, Time=0.05 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.25 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3564.923, Time=0.07 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3427.258, Time=0.08 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.41 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.52 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3429.258, Time=0.20 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.029 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1709.629
Date:                Sun, 12 Dec 2021   AIC                           3427.258
Time:                        13:25:07   BIC                           3446.021
Sample:                             0   HQIC                          3434.464
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1981      0.003   -389.386      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.699      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.737      0.000      -0.410      -0.387
sigma2         4.0860      0.019    215.311      0.000       4.049       4.123
===================================================================================
Ljung-Box (L1) (Q):                  14.57   Jarque-Bera (JB):           2460901.70
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.75
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

WARNING:tensorflow:Layer lstm_2 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.07191, saving model to LSTM1.h5
17/17 - 2s - loss: 0.5361 - val_loss: 0.0719 - lr: 0.0010 - 2s/epoch - 121ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.07191 to 0.04083, saving model to LSTM1.h5
17/17 - 0s - loss: 0.2156 - val_loss: 0.0408 - lr: 0.0010 - 167ms/epoch - 10ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.04083 to 0.03980, saving model to LSTM1.h5
17/17 - 0s - loss: 0.0759 - val_loss: 0.0398 - lr: 0.0010 - 174ms/epoch - 10ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.03980
17/17 - 0s - loss: 0.0752 - val_loss: 0.0900 - lr: 0.0010 - 197ms/epoch - 12ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.03980
17/17 - 0s - loss: 0.0786 - val_loss: 0.0610 - lr: 0.0010 - 149ms/epoch - 9ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.03980 to 0.02048, saving model to LSTM1.h5
17/17 - 0s - loss: 0.0640 - val_loss: 0.0205 - lr: 0.0010 - 166ms/epoch - 10ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.02048
17/17 - 0s - loss: 0.0500 - val_loss: 0.3935 - lr: 0.0010 - 151ms/epoch - 9ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.02048
17/17 - 0s - loss: 0.0583 - val_loss: 0.0728 - lr: 0.0010 - 146ms/epoch - 9ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.02048
17/17 - 0s - loss: 0.0530 - val_loss: 0.0487 - lr: 0.0010 - 160ms/epoch - 9ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.02048 to 0.00685, saving model to LSTM1.h5
17/17 - 0s - loss: 0.0460 - val_loss: 0.0069 - lr: 0.0010 - 176ms/epoch - 10ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0505 - val_loss: 0.1364 - lr: 0.0010 - 144ms/epoch - 8ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0399 - val_loss: 0.0105 - lr: 0.0010 - 149ms/epoch - 9ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0350 - val_loss: 0.0142 - lr: 0.0010 - 192ms/epoch - 11ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0362 - val_loss: 0.0711 - lr: 0.0010 - 163ms/epoch - 10ms/step
Epoch 15/500

Epoch 00015: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00015: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0354 - val_loss: 0.0117 - lr: 0.0010 - 169ms/epoch - 10ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0376 - val_loss: 0.0131 - lr: 1.0000e-04 - 154ms/epoch - 9ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0317 - val_loss: 0.0129 - lr: 1.0000e-04 - 153ms/epoch - 9ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0325 - val_loss: 0.0139 - lr: 1.0000e-04 - 155ms/epoch - 9ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0281 - val_loss: 0.0138 - lr: 1.0000e-04 - 173ms/epoch - 10ms/step
Epoch 20/500

Epoch 00020: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00020: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0307 - val_loss: 0.0125 - lr: 1.0000e-04 - 166ms/epoch - 10ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0300 - val_loss: 0.0125 - lr: 1.0000e-05 - 155ms/epoch - 9ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0299 - val_loss: 0.0123 - lr: 1.0000e-05 - 149ms/epoch - 9ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0292 - val_loss: 0.0123 - lr: 1.0000e-05 - 148ms/epoch - 9ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0296 - val_loss: 0.0122 - lr: 1.0000e-05 - 158ms/epoch - 9ms/step
Epoch 25/500

Epoch 00025: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00025: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0289 - val_loss: 0.0125 - lr: 1.0000e-05 - 145ms/epoch - 9ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0287 - val_loss: 0.0128 - lr: 1.0000e-05 - 157ms/epoch - 9ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0299 - val_loss: 0.0129 - lr: 1.0000e-05 - 154ms/epoch - 9ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0295 - val_loss: 0.0130 - lr: 1.0000e-05 - 143ms/epoch - 8ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0307 - val_loss: 0.0133 - lr: 1.0000e-05 - 168ms/epoch - 10ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0284 - val_loss: 0.0134 - lr: 1.0000e-05 - 155ms/epoch - 9ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0318 - val_loss: 0.0133 - lr: 1.0000e-05 - 154ms/epoch - 9ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0295 - val_loss: 0.0136 - lr: 1.0000e-05 - 172ms/epoch - 10ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0285 - val_loss: 0.0135 - lr: 1.0000e-05 - 173ms/epoch - 10ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0281 - val_loss: 0.0135 - lr: 1.0000e-05 - 141ms/epoch - 8ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0304 - val_loss: 0.0138 - lr: 1.0000e-05 - 148ms/epoch - 9ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0304 - val_loss: 0.0141 - lr: 1.0000e-05 - 145ms/epoch - 9ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0303 - val_loss: 0.0139 - lr: 1.0000e-05 - 155ms/epoch - 9ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0282 - val_loss: 0.0138 - lr: 1.0000e-05 - 153ms/epoch - 9ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0268 - val_loss: 0.0138 - lr: 1.0000e-05 - 148ms/epoch - 9ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0314 - val_loss: 0.0142 - lr: 1.0000e-05 - 152ms/epoch - 9ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0280 - val_loss: 0.0142 - lr: 1.0000e-05 - 140ms/epoch - 8ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0309 - val_loss: 0.0141 - lr: 1.0000e-05 - 152ms/epoch - 9ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0304 - val_loss: 0.0144 - lr: 1.0000e-05 - 158ms/epoch - 9ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0295 - val_loss: 0.0145 - lr: 1.0000e-05 - 143ms/epoch - 8ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0296 - val_loss: 0.0147 - lr: 1.0000e-05 - 166ms/epoch - 10ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0330 - val_loss: 0.0146 - lr: 1.0000e-05 - 157ms/epoch - 9ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0286 - val_loss: 0.0146 - lr: 1.0000e-05 - 156ms/epoch - 9ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0307 - val_loss: 0.0142 - lr: 1.0000e-05 - 169ms/epoch - 10ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0280 - val_loss: 0.0140 - lr: 1.0000e-05 - 153ms/epoch - 9ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0290 - val_loss: 0.0138 - lr: 1.0000e-05 - 154ms/epoch - 9ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0292 - val_loss: 0.0137 - lr: 1.0000e-05 - 161ms/epoch - 9ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0308 - val_loss: 0.0135 - lr: 1.0000e-05 - 151ms/epoch - 9ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0303 - val_loss: 0.0138 - lr: 1.0000e-05 - 157ms/epoch - 9ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0301 - val_loss: 0.0140 - lr: 1.0000e-05 - 158ms/epoch - 9ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0299 - val_loss: 0.0140 - lr: 1.0000e-05 - 166ms/epoch - 10ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0295 - val_loss: 0.0140 - lr: 1.0000e-05 - 149ms/epoch - 9ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0293 - val_loss: 0.0138 - lr: 1.0000e-05 - 155ms/epoch - 9ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0301 - val_loss: 0.0134 - lr: 1.0000e-05 - 158ms/epoch - 9ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0342 - val_loss: 0.0134 - lr: 1.0000e-05 - 152ms/epoch - 9ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.00685
17/17 - 0s - loss: 0.0271 - val_loss: 0.0138 - lr: 1.0000e-05 - 171ms/epoch - 10ms/step
Epoch 00060: early stopping
SMA
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 39.29635655476263 
RMSE:	 6.268680607174258 
MAPE:	 5.07416987111494

EMA
Prediction vs Close:		56.34% Accuracy
Prediction vs Prediction:	48.88% Accuracy
MSE:	 30.25057164247689 
RMSE:	 5.50005196725239 
MAPE:	 4.444486270049439

WMA
Prediction vs Close:		57.84% Accuracy
Prediction vs Prediction:	41.79% Accuracy
MSE:	 64.92643731055051 
RMSE:	 8.057694292448089 
MAPE:	 6.289003841800032
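The summary blocks report MSE, RMSE, and MAPE for each moving-average variant. These are the conventional definitions (RMSE is always the square root of MSE, which the printed figures obey); the helper below is an illustrative sketch, with names of my choosing rather than the notebook's own:

```python
import math

# Conventional regression error metrics, as printed in the summaries above.
def regression_metrics(y_true, y_pred):
    errs = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errs) / len(errs)          # mean squared error
    rmse = math.sqrt(mse)                               # RMSE = sqrt(MSE)
    mape = 100 * sum(abs(e / t)                         # mean abs. % error
                     for e, t in zip(errs, y_true)) / len(errs)
    return mse, rmse, mape
```

As a sanity check, `sqrt(39.29635655476263) ≈ 6.268680607174258`, matching the SMA row above.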
DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
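The TA-Lib help text above names DEMA but not its formula. The standard definition (Mulloy's double EMA) is `DEMA = 2*EMA(price, n) - EMA(EMA(price, n), n)`, which cancels much of a plain EMA's lag. A minimal pure-Python sketch of that definition; TA-Lib seeds its EMAs differently (and emits NaN for the warm-up period), so values will not match its output exactly:

```python
# Exponential moving average with smoothing factor alpha = 2 / (n + 1),
# seeded with the first observation.
def ema(xs, n):
    alpha = 2.0 / (n + 1)
    out = [xs[0]]
    for x in xs[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

# Double EMA: 2*EMA - EMA(EMA), reducing the lag of a single EMA.
def dema(xs, n=30):
    e1 = ema(xs, n)
    e2 = ema(e1, n)
    return [2 * a - b for a, b in zip(e1, e2)]
```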

Working on DEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.39 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4436.126, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3965.317, Time=0.04 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.38 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3736.589, Time=0.07 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3598.951, Time=0.08 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.00 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.95 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3600.951, Time=0.18 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.122 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1795.475
Date:                Sun, 12 Dec 2021   AIC                           3598.951
Time:                        13:26:28   BIC                           3617.714
Sample:                             0   HQIC                          3606.157
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1983      0.003   -389.581      0.000      -1.204      -1.192
ar.L2         -0.8973      0.006   -139.732      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.649      0.000      -0.410      -0.387
sigma2         5.0573      0.023    215.292      0.000       5.011       5.103
===================================================================================
Ljung-Box (L1) (Q):                  14.41   Jarque-Bera (JB):           2460553.80
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.89
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.74
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
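The information criteria in the SARIMAX table above can be checked by hand. Assuming k = 4 estimated parameters (ar.L1 through ar.L3 plus sigma2) and an effective sample of 808 − 3 = 805 observations after the d = 3 differencing (both inferred from the table, not stated in it), AIC = 2k − 2 ln L and BIC = k ln(n) − 2 ln L reproduce the printed values:

```python
import math

# Recompute AIC/BIC from the reported log-likelihood of SARIMAX(3, 3, 0).
# k = 4 parameters and n_eff = 805 are inferences from the summary table.
k, loglik, n_eff = 4, -1795.475, 808 - 3
aic = 2 * k - 2 * loglik              # ~3598.95, as printed
bic = k * math.log(n_eff) - 2 * loglik  # ~3617.71, as printed
```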

WARNING:tensorflow:Layer lstm_3 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 1.03480, saving model to LSTM1.h5
10/10 - 2s - loss: 0.2906 - val_loss: 1.0348 - lr: 0.0010 - 2s/epoch - 166ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 1.03480 to 0.10184, saving model to LSTM1.h5
10/10 - 0s - loss: 0.0729 - val_loss: 0.1018 - lr: 0.0010 - 145ms/epoch - 15ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0690 - val_loss: 0.3825 - lr: 0.0010 - 109ms/epoch - 11ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0559 - val_loss: 0.3389 - lr: 0.0010 - 112ms/epoch - 11ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0561 - val_loss: 0.1873 - lr: 0.0010 - 105ms/epoch - 11ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0567 - val_loss: 0.1386 - lr: 0.0010 - 101ms/epoch - 10ms/step
Epoch 7/500

Epoch 00007: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00007: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0669 - val_loss: 0.1327 - lr: 0.0010 - 105ms/epoch - 11ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0567 - val_loss: 0.1416 - lr: 1.0000e-04 - 120ms/epoch - 12ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0443 - val_loss: 0.1491 - lr: 1.0000e-04 - 117ms/epoch - 12ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0430 - val_loss: 0.1551 - lr: 1.0000e-04 - 96ms/epoch - 10ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0399 - val_loss: 0.1599 - lr: 1.0000e-04 - 90ms/epoch - 9ms/step
Epoch 12/500

Epoch 00012: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00012: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0371 - val_loss: 0.1642 - lr: 1.0000e-04 - 106ms/epoch - 11ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0344 - val_loss: 0.1644 - lr: 1.0000e-05 - 141ms/epoch - 14ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0371 - val_loss: 0.1646 - lr: 1.0000e-05 - 119ms/epoch - 12ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0331 - val_loss: 0.1650 - lr: 1.0000e-05 - 128ms/epoch - 13ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0365 - val_loss: 0.1656 - lr: 1.0000e-05 - 115ms/epoch - 12ms/step
Epoch 17/500

Epoch 00017: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00017: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0348 - val_loss: 0.1660 - lr: 1.0000e-05 - 117ms/epoch - 12ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0391 - val_loss: 0.1664 - lr: 1.0000e-05 - 105ms/epoch - 11ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0390 - val_loss: 0.1670 - lr: 1.0000e-05 - 100ms/epoch - 10ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0352 - val_loss: 0.1675 - lr: 1.0000e-05 - 107ms/epoch - 11ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0370 - val_loss: 0.1678 - lr: 1.0000e-05 - 102ms/epoch - 10ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0318 - val_loss: 0.1682 - lr: 1.0000e-05 - 114ms/epoch - 11ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0363 - val_loss: 0.1688 - lr: 1.0000e-05 - 118ms/epoch - 12ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0355 - val_loss: 0.1689 - lr: 1.0000e-05 - 96ms/epoch - 10ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0321 - val_loss: 0.1695 - lr: 1.0000e-05 - 106ms/epoch - 11ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0345 - val_loss: 0.1698 - lr: 1.0000e-05 - 105ms/epoch - 10ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0359 - val_loss: 0.1701 - lr: 1.0000e-05 - 105ms/epoch - 10ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0377 - val_loss: 0.1702 - lr: 1.0000e-05 - 131ms/epoch - 13ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0347 - val_loss: 0.1702 - lr: 1.0000e-05 - 123ms/epoch - 12ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0357 - val_loss: 0.1700 - lr: 1.0000e-05 - 115ms/epoch - 11ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0318 - val_loss: 0.1699 - lr: 1.0000e-05 - 108ms/epoch - 11ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0379 - val_loss: 0.1708 - lr: 1.0000e-05 - 107ms/epoch - 11ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0378 - val_loss: 0.1723 - lr: 1.0000e-05 - 117ms/epoch - 12ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0353 - val_loss: 0.1733 - lr: 1.0000e-05 - 106ms/epoch - 11ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0340 - val_loss: 0.1732 - lr: 1.0000e-05 - 105ms/epoch - 10ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0321 - val_loss: 0.1733 - lr: 1.0000e-05 - 103ms/epoch - 10ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0382 - val_loss: 0.1737 - lr: 1.0000e-05 - 89ms/epoch - 9ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0360 - val_loss: 0.1739 - lr: 1.0000e-05 - 96ms/epoch - 10ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0353 - val_loss: 0.1743 - lr: 1.0000e-05 - 95ms/epoch - 10ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0308 - val_loss: 0.1740 - lr: 1.0000e-05 - 115ms/epoch - 11ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0347 - val_loss: 0.1742 - lr: 1.0000e-05 - 105ms/epoch - 11ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0338 - val_loss: 0.1743 - lr: 1.0000e-05 - 101ms/epoch - 10ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0333 - val_loss: 0.1738 - lr: 1.0000e-05 - 103ms/epoch - 10ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0344 - val_loss: 0.1733 - lr: 1.0000e-05 - 117ms/epoch - 12ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0357 - val_loss: 0.1731 - lr: 1.0000e-05 - 95ms/epoch - 10ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0369 - val_loss: 0.1735 - lr: 1.0000e-05 - 122ms/epoch - 12ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0343 - val_loss: 0.1740 - lr: 1.0000e-05 - 99ms/epoch - 10ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0378 - val_loss: 0.1753 - lr: 1.0000e-05 - 92ms/epoch - 9ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0347 - val_loss: 0.1759 - lr: 1.0000e-05 - 88ms/epoch - 9ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0349 - val_loss: 0.1764 - lr: 1.0000e-05 - 115ms/epoch - 11ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0360 - val_loss: 0.1769 - lr: 1.0000e-05 - 107ms/epoch - 11ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.10184
10/10 - 0s - loss: 0.0301 - val_loss: 0.1769 - lr: 1.0000e-05 - 109ms/epoch - 11ms/step
Epoch 00052: early stopping
DEMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 52.98651621196421 
RMSE:	 7.279183760008 
MAPE:	 5.725540843661134
KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18
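The help text above names KAMA but not its formula. Kaufman's adaptive moving average is an EMA whose smoothing constant adapts to an efficiency ratio ER = |net change over n bars| / (sum of absolute bar-to-bar changes), so the average tracks price quickly in trends and flattens in choppy markets. A pure-Python sketch of that definition; seeding and warm-up handling may differ from TA-Lib's implementation:

```python
# Kaufman Adaptive Moving Average: smoothing constant interpolates between
# a fast and a slow EMA constant according to the efficiency ratio.
def kama(prices, n=30, fast=2, slow=30):
    fast_sc, slow_sc = 2.0 / (fast + 1), 2.0 / (slow + 1)
    out = [prices[0]]
    for t in range(1, len(prices)):
        lo = max(0, t - n)
        change = abs(prices[t] - prices[lo])          # net move over window
        vol = sum(abs(prices[i] - prices[i - 1])      # path length over window
                  for i in range(lo + 1, t + 1))
        er = change / vol if vol else 0.0             # efficiency ratio in [0, 1]
        sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2
        out.append(out[-1] + sc * (prices[t] - out[-1]))
    return out
```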

Working on KAMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.38 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4190.464, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3724.371, Time=0.04 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.28 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3494.154, Time=0.07 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3357.435, Time=0.09 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.23 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.75 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3359.435, Time=0.20 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.079 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1674.717
Date:                Sun, 12 Dec 2021   AIC                           3357.435
Time:                        13:27:39   BIC                           3376.198
Sample:                             0   HQIC                          3364.641
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1955      0.003   -381.246      0.000      -1.202      -1.189
ar.L2         -0.8964      0.007   -135.835      0.000      -0.909      -0.883
ar.L3         -0.3971      0.006    -67.229      0.000      -0.409      -0.385
sigma2         3.7466      0.018    211.623      0.000       3.712       3.781
===================================================================================
Ljung-Box (L1) (Q):                  14.20   Jarque-Bera (JB):           2338363.32
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.01   Skew:                             3.76
Prob(H) (two-sided):                  0.00   Kurtosis:                       266.93
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

WARNING:tensorflow:Layer lstm_4 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.15184, saving model to LSTM1.h5
45/45 - 2s - loss: 0.3564 - val_loss: 0.1518 - lr: 0.0010 - 2s/epoch - 43ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.15184
45/45 - 0s - loss: 0.1500 - val_loss: 0.3345 - lr: 0.0010 - 401ms/epoch - 9ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.15184
45/45 - 0s - loss: 0.0501 - val_loss: 0.7766 - lr: 0.0010 - 368ms/epoch - 8ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.15184 to 0.09772, saving model to LSTM1.h5
45/45 - 0s - loss: 0.0492 - val_loss: 0.0977 - lr: 0.0010 - 437ms/epoch - 10ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.09772 to 0.01025, saving model to LSTM1.h5
45/45 - 0s - loss: 0.0431 - val_loss: 0.0102 - lr: 0.0010 - 464ms/epoch - 10ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0480 - val_loss: 0.1955 - lr: 0.0010 - 408ms/epoch - 9ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0356 - val_loss: 0.0301 - lr: 0.0010 - 414ms/epoch - 9ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0279 - val_loss: 0.0184 - lr: 0.0010 - 402ms/epoch - 9ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0293 - val_loss: 0.0412 - lr: 0.0010 - 365ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00010: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0301 - val_loss: 0.0185 - lr: 0.0010 - 433ms/epoch - 10ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0314 - val_loss: 0.0292 - lr: 1.0000e-04 - 451ms/epoch - 10ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0258 - val_loss: 0.0291 - lr: 1.0000e-04 - 415ms/epoch - 9ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0258 - val_loss: 0.0261 - lr: 1.0000e-04 - 426ms/epoch - 9ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0278 - val_loss: 0.0284 - lr: 1.0000e-04 - 361ms/epoch - 8ms/step
Epoch 15/500

Epoch 00015: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00015: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0258 - val_loss: 0.0303 - lr: 1.0000e-04 - 388ms/epoch - 9ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0223 - val_loss: 0.0307 - lr: 1.0000e-05 - 410ms/epoch - 9ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0262 - val_loss: 0.0304 - lr: 1.0000e-05 - 391ms/epoch - 9ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0246 - val_loss: 0.0309 - lr: 1.0000e-05 - 388ms/epoch - 9ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0240 - val_loss: 0.0315 - lr: 1.0000e-05 - 395ms/epoch - 9ms/step
Epoch 20/500

Epoch 00020: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00020: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0226 - val_loss: 0.0319 - lr: 1.0000e-05 - 395ms/epoch - 9ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0244 - val_loss: 0.0322 - lr: 1.0000e-05 - 399ms/epoch - 9ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0234 - val_loss: 0.0318 - lr: 1.0000e-05 - 448ms/epoch - 10ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0248 - val_loss: 0.0319 - lr: 1.0000e-05 - 405ms/epoch - 9ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0235 - val_loss: 0.0318 - lr: 1.0000e-05 - 391ms/epoch - 9ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0225 - val_loss: 0.0322 - lr: 1.0000e-05 - 350ms/epoch - 8ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0240 - val_loss: 0.0322 - lr: 1.0000e-05 - 401ms/epoch - 9ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0250 - val_loss: 0.0323 - lr: 1.0000e-05 - 407ms/epoch - 9ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0247 - val_loss: 0.0330 - lr: 1.0000e-05 - 381ms/epoch - 8ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0237 - val_loss: 0.0336 - lr: 1.0000e-05 - 375ms/epoch - 8ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0235 - val_loss: 0.0340 - lr: 1.0000e-05 - 359ms/epoch - 8ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0223 - val_loss: 0.0341 - lr: 1.0000e-05 - 429ms/epoch - 10ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0214 - val_loss: 0.0346 - lr: 1.0000e-05 - 430ms/epoch - 10ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0241 - val_loss: 0.0347 - lr: 1.0000e-05 - 399ms/epoch - 9ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0220 - val_loss: 0.0359 - lr: 1.0000e-05 - 392ms/epoch - 9ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0217 - val_loss: 0.0356 - lr: 1.0000e-05 - 389ms/epoch - 9ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0232 - val_loss: 0.0345 - lr: 1.0000e-05 - 381ms/epoch - 8ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0248 - val_loss: 0.0340 - lr: 1.0000e-05 - 372ms/epoch - 8ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0236 - val_loss: 0.0344 - lr: 1.0000e-05 - 391ms/epoch - 9ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0232 - val_loss: 0.0348 - lr: 1.0000e-05 - 401ms/epoch - 9ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0234 - val_loss: 0.0341 - lr: 1.0000e-05 - 376ms/epoch - 8ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0204 - val_loss: 0.0340 - lr: 1.0000e-05 - 388ms/epoch - 9ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0245 - val_loss: 0.0349 - lr: 1.0000e-05 - 385ms/epoch - 9ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0219 - val_loss: 0.0355 - lr: 1.0000e-05 - 441ms/epoch - 10ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0220 - val_loss: 0.0353 - lr: 1.0000e-05 - 413ms/epoch - 9ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0220 - val_loss: 0.0357 - lr: 1.0000e-05 - 359ms/epoch - 8ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0219 - val_loss: 0.0367 - lr: 1.0000e-05 - 388ms/epoch - 9ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0220 - val_loss: 0.0361 - lr: 1.0000e-05 - 421ms/epoch - 9ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0236 - val_loss: 0.0353 - lr: 1.0000e-05 - 419ms/epoch - 9ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0222 - val_loss: 0.0359 - lr: 1.0000e-05 - 493ms/epoch - 11ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0249 - val_loss: 0.0355 - lr: 1.0000e-05 - 470ms/epoch - 10ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0228 - val_loss: 0.0357 - lr: 1.0000e-05 - 383ms/epoch - 9ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0229 - val_loss: 0.0348 - lr: 1.0000e-05 - 408ms/epoch - 9ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0229 - val_loss: 0.0345 - lr: 1.0000e-05 - 393ms/epoch - 9ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0221 - val_loss: 0.0336 - lr: 1.0000e-05 - 469ms/epoch - 10ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.01025
45/45 - 0s - loss: 0.0253 - val_loss: 0.0328 - lr: 1.0000e-05 - 427ms/epoch - 9ms/step
Epoch 00055: early stopping
KAMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 33.6026987861483 
RMSE:	 5.796783486223054 
MAPE:	 4.518981487962124
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14
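MIDPOINT is the simplest of the overlap studies used here: the midpoint of the highest and lowest input value over the trailing window. A pure-Python sketch; TA-Lib emits NaN for the warm-up bars, whereas this version just truncates the window at the start of the series:

```python
# Midpoint over period: (highest + lowest) / 2 of the trailing window.
def midpoint(xs, timeperiod=14):
    out = []
    for t in range(len(xs)):
        window = xs[max(0, t - timeperiod + 1): t + 1]
        out.append((max(window) + min(window)) / 2.0)
    return out
```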

Working on MIDPOINT predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.35 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4212.289, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3747.746, Time=0.04 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.27 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3523.401, Time=0.10 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3387.759, Time=0.12 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.42 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.00 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3389.758, Time=0.20 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.562 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1689.879
Date:                Sun, 12 Dec 2021   AIC                           3387.759
Time:                        13:29:32   BIC                           3406.522
Sample:                             0   HQIC                          3394.964
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1878      0.003   -345.315      0.000      -1.195      -1.181
ar.L2         -0.8876      0.007   -121.809      0.000      -0.902      -0.873
ar.L3         -0.3957      0.007    -60.127      0.000      -0.409      -0.383
sigma2         3.8904      0.020    193.404      0.000       3.851       3.930
===================================================================================
Ljung-Box (L1) (Q):                  13.21   Jarque-Bera (JB):           1659080.01
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.08   Skew:                             3.28
Prob(H) (two-sided):                  0.00   Kurtosis:                       225.31
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

WARNING:tensorflow:Layer lstm_5 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.02821, saving model to LSTM1.h5
58/58 - 2s - loss: 0.1914 - val_loss: 0.0282 - lr: 0.0010 - 2s/epoch - 40ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.02821 to 0.02136, saving model to LSTM1.h5
58/58 - 1s - loss: 0.0663 - val_loss: 0.0214 - lr: 0.0010 - 502ms/epoch - 9ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.02136 to 0.00821, saving model to LSTM1.h5
58/58 - 1s - loss: 0.1170 - val_loss: 0.0082 - lr: 0.0010 - 514ms/epoch - 9ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.00821
58/58 - 0s - loss: 0.0619 - val_loss: 0.5512 - lr: 0.0010 - 496ms/epoch - 9ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.00821
58/58 - 1s - loss: 0.0423 - val_loss: 0.4569 - lr: 0.0010 - 519ms/epoch - 9ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.00821
58/58 - 0s - loss: 0.0480 - val_loss: 0.2682 - lr: 0.0010 - 491ms/epoch - 8ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00821
58/58 - 0s - loss: 0.0383 - val_loss: 0.0790 - lr: 0.0010 - 483ms/epoch - 8ms/step
Epoch 8/500

Epoch 00008: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00008: val_loss did not improve from 0.00821
58/58 - 0s - loss: 0.0367 - val_loss: 0.2302 - lr: 0.0010 - 494ms/epoch - 9ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00821
58/58 - 0s - loss: 0.0332 - val_loss: 0.2024 - lr: 1.0000e-04 - 449ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00821
58/58 - 0s - loss: 0.0316 - val_loss: 0.1622 - lr: 1.0000e-04 - 484ms/epoch - 8ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00821
58/58 - 0s - loss: 0.0313 - val_loss: 0.1290 - lr: 1.0000e-04 - 497ms/epoch - 9ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00821
58/58 - 0s - loss: 0.0296 - val_loss: 0.1063 - lr: 1.0000e-04 - 465ms/epoch - 8ms/step
Epoch 13/500

Epoch 00013: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00013: val_loss did not improve from 0.00821
58/58 - 1s - loss: 0.0318 - val_loss: 0.0985 - lr: 1.0000e-04 - 521ms/epoch - 9ms/step

Epoch 00018: ReduceLROnPlateau reducing learning rate to 1e-05.

[... epochs 14-52 elided: val_loss did not improve from 0.00821; training continued at lr 1e-05 ...]

Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00821
58/58 - 0s - loss: 0.0260 - val_loss: 0.0784 - lr: 1.0000e-05 - 488ms/epoch - 8ms/step
Epoch 00053: early stopping
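The checkpoint, learning-rate, and early-stopping messages in the log above are produced by Keras callbacks. The sketch below is a consistent configuration, not the notebook's exact code: the patience values are assumptions inferred from the spacing of the log messages.

```python
from tensorflow.keras.callbacks import (EarlyStopping, ModelCheckpoint,
                                        ReduceLROnPlateau)

# Assumed configuration; patience values inferred from the log spacing above.
callbacks = [
    # Writes LSTM1.h5 whenever val_loss improves and prints the
    # "val_loss improved / did not improve" lines seen in the logs.
    ModelCheckpoint('LSTM1.h5', monitor='val_loss',
                    save_best_only=True, verbose=1),
    # Drops the learning rate 10x after stagnant epochs, down to 1e-5.
    ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=5,
                      min_lr=1e-5, verbose=1),
    # Stops training after 50 epochs without a new best val_loss.
    EarlyStopping(monitor='val_loss', patience=50, verbose=1),
]

# model.fit(X_train, y_train, epochs=500, validation_data=(X_val, y_val),
#           callbacks=callbacks, verbose=2)
```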
SMA
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 39.29635655476263 
RMSE:	 6.268680607174258 
MAPE:	 5.07416987111494

EMA
Prediction vs Close:		56.34% Accuracy
Prediction vs Prediction:	48.88% Accuracy
MSE:	 30.25057164247689 
RMSE:	 5.50005196725239 
MAPE:	 4.444486270049439

WMA
Prediction vs Close:		57.84% Accuracy
Prediction vs Prediction:	41.79% Accuracy
MSE:	 64.92643731055051 
RMSE:	 8.057694292448089 
MAPE:	 6.289003841800032

DEMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 52.98651621196421 
RMSE:	 7.279183760008 
MAPE:	 5.725540843661134

KAMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 33.6026987861483 
RMSE:	 5.796783486223054 
MAPE:	 4.518981487962124

MIDPOINT
Prediction vs Close:		50.0% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 22.7414116550439 
RMSE:	 4.768795618921396 
MAPE:	 3.944458615157319
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
19

Working on T3 predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.36 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4414.515, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3944.062, Time=0.04 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.38 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3715.173, Time=0.08 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3577.471, Time=0.09 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.45 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.62 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3579.471, Time=0.19 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.248 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1784.736
Date:                Sun, 12 Dec 2021   AIC                           3577.471
Time:                        13:31:25   BIC                           3596.235
Sample:                             0   HQIC                          3584.677
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.844      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.861      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.862      0.000      -0.410      -0.387
sigma2         4.9242      0.023    215.469      0.000       4.879       4.969
===================================================================================
Ljung-Box (L1) (Q):                  14.55   Jarque-Bera (JB):           2468024.38
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       274.15
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

WARNING:tensorflow:Layer lstm_6 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
WARNING:tensorflow:Layer lstm_6 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05392, saving model to LSTM1.h5
43/43 - 2s - loss: 0.3047 - val_loss: 0.0539 - lr: 0.0010 - 2s/epoch - 44ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.05392 to 0.04661, saving model to LSTM1.h5
43/43 - 0s - loss: 0.1255 - val_loss: 0.0466 - lr: 0.0010 - 395ms/epoch - 9ms/step

Epoch 00007: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00012: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

[... epochs 3-52 elided: val_loss did not improve from 0.04661 ...]

Epoch 00052: val_loss did not improve from 0.04661
43/43 - 0s - loss: 0.0311 - val_loss: 0.0838 - lr: 1.0000e-05 - 401ms/epoch - 9ms/step
Epoch 00052: early stopping

T3
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 71.14601272964887 
RMSE:	 8.434809584670473 
MAPE:	 6.848574357624394
TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9

Working on TEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.49 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4352.703, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3889.412, Time=0.04 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.26 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3689.930, Time=0.06 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3574.245, Time=0.11 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.24 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.84 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3576.245, Time=0.22 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.299 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1783.123
Date:                Sun, 12 Dec 2021   AIC                           3574.245
Time:                        13:32:51   BIC                           3593.008
Sample:                             0   HQIC                          3581.451
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1480      0.004   -302.430      0.000      -1.155      -1.141
ar.L2         -0.8300      0.008    -99.682      0.000      -0.846      -0.814
ar.L3         -0.3687      0.007    -50.527      0.000      -0.383      -0.354
sigma2         4.9055      0.028    175.970      0.000       4.851       4.960
===================================================================================
Ljung-Box (L1) (Q):                  11.61   Jarque-Bera (JB):           1261976.58
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.16   Skew:                             2.52
Prob(H) (two-sided):                  0.00   Kurtosis:                       196.90
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

WARNING:tensorflow:Layer lstm_7 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
WARNING:tensorflow:Layer lstm_7 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04929, saving model to LSTM1.h5
90/90 - 2s - loss: 0.2001 - val_loss: 0.0493 - lr: 0.0010 - 2s/epoch - 25ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.04929 to 0.02034, saving model to LSTM1.h5
90/90 - 1s - loss: 0.0385 - val_loss: 0.0203 - lr: 0.0010 - 902ms/epoch - 10ms/step

Epoch 00011: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

[... epochs 7-56 elided: val_loss did not improve from 0.02034 ...]

Epoch 00056: val_loss did not improve from 0.02034
90/90 - 1s - loss: 0.0201 - val_loss: 0.1042 - lr: 1.0000e-05 - 788ms/epoch - 9ms/step
Epoch 00056: early stopping

TEMA
Prediction vs Close:		51.12% Accuracy
Prediction vs Prediction:	49.25% Accuracy
MSE:	 26.608302345887367 
RMSE:	 5.158323598407468 
MAPE:	 4.336839144940602
Runtime: mins: 13.275118494316668
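The accuracy and error figures printed after each run can be reproduced with a few NumPy one-liners. The exact definition of "Prediction vs Close" accuracy is not shown in this excerpt; the sketch below assumes it compares the sign of day-over-day moves, which is one common choice, and the price arrays are hypothetical.

```python
import numpy as np

def directional_accuracy(y_true, y_pred):
    # Percentage of steps where the predicted move direction matches the
    # actual move direction (assumed definition of the accuracy metric).
    return 100.0 * np.mean(np.sign(np.diff(y_true)) == np.sign(np.diff(y_pred)))

def error_metrics(y_true, y_pred):
    # MSE, RMSE, and MAPE as reported in the summaries above.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = 100.0 * np.mean(np.abs(err) / np.abs(y_true))
    return mse, rmse, mape

# Toy example with hypothetical prices.
y_true = np.array([100.0, 101.0, 99.0, 102.0])
y_pred = np.array([100.5, 100.0, 99.5, 101.0])
mse, rmse, mape = error_metrics(y_true, y_pred)
acc = directional_accuracy(y_true, y_pred)
```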

Architecture used

In [85]:
from google.colab import files
import cv2
uploaded = files.upload()
Saving Experiment1.png to Experiment1 (1).png
In [86]:
import matplotlib.pyplot as plt

imgfile = 'Experiment1'
img = cv2.imread(imgfile + '.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # OpenCV reads BGR; matplotlib expects RGB
plt.figure(figsize=(20, 10))
plt.axis("off")
plt.title('LSTM Architecture ' + imgfile, fontsize=18)
plt.imshow(img)
Out[86]:
<matplotlib.image.AxesImage at 0x7f75c3c67b10>

Excess kurtosis measures how far the kurtosis of a distribution departs from that of a normal distribution. Since the kurtosis of a normal distribution equals 3, excess kurtosis is given by the formula below:

Excess Kurtosis = Kurtosis – 3
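For reference, `scipy.stats.kurtosis` applies this subtraction by default (`fisher=True`), so it already returns excess kurtosis:

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(42)
sample = rng.normal(size=100_000)

excess = kurtosis(sample)                # fisher=True (default): kurtosis - 3
plain = kurtosis(sample, fisher=False)   # raw kurtosis, close to 3 here

print(round(plain - excess, 6))          # 3.0 -- the two differ by exactly 3
```

For a large normal sample the excess kurtosis is close to 0, so a strongly non-zero value (such as the Jarque-Bera kurtosis figures in the SARIMAX tables above) signals heavy tails.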

Model Plots

In [87]:
np.save("X_train_appl.npy", X_train)
np.save("y_train_appl.npy", y_train)
np.save("X_test_appl.npy", X_test)
np.save("y_test_appl.npy", y_test)
np.save("yc_train_appl.npy", yc_train)
np.save("yc_test_appl.npy", yc_test)
np.save('index_train_appl.npy', index_train)
np.save('index_test_appl.npy', index_test)
In [88]:
list(simulation1.keys())
Out[88]:
['SMA', 'EMA', 'WMA', 'DEMA', 'KAMA', 'MIDPOINT', 'T3', 'TEMA']
In [97]:
import json
with open('simulation1_data.json') as json_file:
    simulation1 = json.load(json_file)
imgfile = 'Experiment1'
In [98]:
for i in range(len(list(simulation1.keys()))):
  SIM = list(simulation1.keys())[i]
  plot_train(simulation1,SIM)
  plot_test(simulation1,SIM)
----- Train RMSE for SMA ----- 8.0413849019624
----- Train_MSE_LSTM for SMA ----- 64.66387114150885
----- Train MAE LSTM for SMA ----- 6.984172090422362
----- Test RMSE for SMA----- 6.268680607174258
----- Test_MSE_LSTM for SMA----- 39.29635655476263
----- Test_MAE_LSTM for SMA----- 5.07416987111494
----- Train RMSE for EMA ----- 9.093795784259413
----- Train_MSE_LSTM for EMA ----- 82.69712176581427
----- Train MAE LSTM for EMA ----- 7.9494118779796255
----- Test RMSE for EMA----- 5.50005196725239
----- Test_MSE_LSTM for EMA----- 30.25057164247689
----- Test_MAE_LSTM for EMA----- 4.444486270049439
----- Train RMSE for WMA ----- 9.540832646041645
----- Train_MSE_LSTM for WMA ----- 91.02748757977402
----- Train MAE LSTM for WMA ----- 8.461359613734103
----- Test RMSE for WMA----- 8.057694292448089
----- Test_MSE_LSTM for WMA----- 64.92643731055051
----- Test_MAE_LSTM for WMA----- 6.289003841800032
----- Train RMSE for DEMA ----- 11.100597362863033
----- Train_MSE_LSTM for DEMA ----- 123.2232618124017
----- Train MAE LSTM for DEMA ----- 9.911796547735277
----- Test RMSE for DEMA----- 7.279183760008
----- Test_MSE_LSTM for DEMA----- 52.98651621196421
----- Test_MAE_LSTM for DEMA----- 5.725540843661134
----- Train RMSE for KAMA ----- 9.711048663776115
----- Train_MSE_LSTM for KAMA ----- 94.30446615022788
----- Train MAE LSTM for KAMA ----- 8.63911047031167
----- Test RMSE for KAMA----- 5.796783486223054
----- Test_MSE_LSTM for KAMA----- 33.6026987861483
----- Test_MAE_LSTM for KAMA----- 4.518981487962124
----- Train RMSE for MIDPOINT ----- 8.42723685446347
----- Train_MSE_LSTM for MIDPOINT ----- 71.01832100122736
----- Train MAE LSTM for MIDPOINT ----- 7.433581155738655
----- Test RMSE for MIDPOINT----- 4.768795618921396
----- Test_MSE_LSTM for MIDPOINT----- 22.7414116550439
----- Test_MAE_LSTM for MIDPOINT----- 3.944458615157319
----- Train RMSE for T3 ----- 10.78572680059317
----- Train_MSE_LSTM for T3 ----- 116.33190261703379
----- Train MAE LSTM for T3 ----- 9.683946313650006
----- Test RMSE for T3----- 8.434809584670473
----- Test_MSE_LSTM for T3----- 71.14601272964887
----- Test_MAE_LSTM for T3----- 6.848574357624394
----- Train RMSE for TEMA ----- 6.938756727032321
----- Train_MSE_LSTM for TEMA ----- 48.14634491693629
----- Train MAE LSTM for TEMA ----- 4.664446043887978
----- Test RMSE for TEMA----- 5.158323598407468
----- Test_MSE_LSTM for TEMA----- 26.608302345887367
----- Test_MAE_LSTM for TEMA----- 4.336839144940602

Univariate ARIMA Multistep Multivariate LSTM Hybrid Model Experiment 2
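Experiment 2 follows the same decomposition as before: the close series is split into a smoothed, low-volatility component (the moving average, modeled by ARIMA) and a high-volatility residual (close minus MA, modeled by the LSTM), and the final forecast is the sum of the two component forecasts. A toy pure-Python sketch of the split and its exact recombination (the `sma` helper is illustrative; the notebook uses TA-Lib's MA functions and pads the warm-up window with 0 via `fillna(0)`):

```python
def sma(xs, period):
    # Simple moving average; pad the warm-up window with 0.0, mirroring fillna(0)
    out = []
    for i in range(len(xs)):
        if i + 1 < period:
            out.append(0.0)
        else:
            out.append(sum(xs[i + 1 - period:i + 1]) / period)
    return out

close = [10.0, 11.0, 13.0, 12.0, 14.0]
low_vol = sma(close, 2)                           # smoothed component
high_vol = [c - l for c, l in zip(close, low_vol)]  # residual component
# Summing the two components reconstructs the original series exactly
recombined = [l + h for l, h in zip(low_vol, high_vol)]
print(recombined)  # → [10.0, 11.0, 13.0, 12.0, 14.0]
```

In the experiment the same additive identity is what justifies summing the ARIMA forecast of `low_vol` and the LSTM forecast of `high_vol` to obtain the final close-price prediction.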

In [96]:
def get_lstm(data,original_data, train_len, test_len,img_file,ma ,lstm_len=3):
    # prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Build the windowed datasets: X has shape (samples, n_steps_in, n_features),
    # i.e. each sample is n_steps_in days of data; yc holds the corresponding closing prices
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)
    # pdb.set_trace()
    X_train, X_test = split_train_test(X)
    y_train, y_test = split_train_test(y)
    # yc_train, yc_test = split_train_test(original_data)
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20
    input_dim = X_train.shape[1]     # n_steps_in (3)
    feature_size = X_train.shape[2]  # number of features (24)
    output_dim = y_train.shape[1]    # n_steps_out (1)



    # # Option 1
    # # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    # model.add(Dense(units=64,activation='relu'))
    # model.add(Dropout(0.5))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')

    # ## Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()


    # option 2
    model = Sequential()
    model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
    model.add(Dense(64))
    model.add(Dense(units=output_dim))
    model.compile(optimizer=Adam(learning_rate = 0.001), loss='mean_squared_error', metrics=['accuracy'])
    # Common code
    callbacks = [
    EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    ModelCheckpoint('LSTM2.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file+'.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # plot loss
    fname2 = img_file+'-'+ma
    plt.title(img_file+'-'+ma+' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2+'.png',dpi='figure')
    pyplot.show()




    # Option 3
    # define custom activation
    # 
    # class Double_Tanh(Activation):
    #     def __init__(self, activation, **kwargs):
    #         super(Double_Tanh, self).__init__(activation, **kwargs)
    #         self.__name__ = 'double_tanh'

    # def double_tanh(x):
    #     return (K.tanh(x) * 2)

    # get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
    #     # Model Generation
    # model = Sequential()
    # #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    # model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
    # model.add(Dense(1))
    # model.add(Activation(double_tanh))
    # model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 4
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(x_train.shape[1], 1)))
    # model.add(LSTM(units=int(lstm_len/2)))
    # model.add(Dense(1, activation='sigmoid'))
    # model.compile(loss='mean_squared_error', optimizer='adam')
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()



    # Generate predictions
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
    outputtr = []
    for i in range(len(predictiontr)):
        outputtr.extend(predictiontr[i])
    predictiontr = outputtr
    # Generate error data

    # TODO: replace with yc and X_test generated by the new multistep method
    mse_tr = mean_squared_error(y_train, predictiontr)
    rmse_tr = mse_tr ** 0.5
    # mape = mean_absolute_percentage_error(X_test, pd.Series(predictiontr))
    mae_tr = mean_absolute_error(y_train, pd.Series(predictiontr))
    # Original_tr = pd.Series(yc_train)
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()


    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte)-det).tolist()
    outputte = []
    for i in range(len(predictionte)):
        outputte.extend(predictionte[i])
    predictionte = outputte
    # Generate error data

    mse_te = mean_squared_error(y_test, predictionte)
    rmse_te = mse_te ** 0.5
    # mape = mean_absolute_percentage_error(X_test, pd.Series(predictionte))
    mae_te = mean_absolute_error(y_test, pd.Series(predictionte))
    # Original_te = pd.Series(yc_test)
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()

    return Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,Original_te,predictionte, mse_te,rmse_te,mae_te
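The scaling round-trip inside `get_lstm` — fit a `MinMaxScaler` with `feature_range=(-1, 1)` on the training frame, then `inverse_transform` the network output back to price scale — reduces to the following arithmetic (a pure-Python sketch; the helper names are illustrative, not scikit-learn's API):

```python
def minmax_fit(xs, lo=-1.0, hi=1.0):
    # Learn the affine map taking [min(xs), max(xs)] onto [lo, hi]
    xmin, xmax = min(xs), max(xs)
    scale = (hi - lo) / (xmax - xmin)
    return xmin, scale, lo

def minmax_transform(x, params):
    xmin, scale, lo = params
    return (x - xmin) * scale + lo

def minmax_inverse(y, params):
    # Invert the affine map, recovering the original price scale
    xmin, scale, lo = params
    return (y - lo) / scale + xmin

params = minmax_fit([100.0, 110.0, 120.0])
y = minmax_transform(115.0, params)
print(y, minmax_inverse(y, params))  # → 0.5 115.0
```

Because the scalers are fitted once and reused, predictions must be inverse-transformed with the same parameters before computing price-scale errors.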
In [97]:
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation2 = {}
    imgfile = 'Experiment2'
    for ma in optimized_period:
              print(ma)
              print(functions[ma])
              print ( int( optimized_period[ma]))
            # if ma == 'SMA':
              low_vol = df.apply(lambda c:  functions[ma](c, timeperiod = int( optimized_period[ma])))
              low_vol = low_vol.fillna(0)
              low_vol_data = df['close']
              high_vol = pd.DataFrame()
              df2 = df.copy()
              for i in df2.columns:
                if i in low_vol.columns:
                  high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
              high_vol_data = df['close']
              ## *****************************************************
              # Generate ARIMA and LSTM predictions
              print('\nWorking on ' + ma + ' predictions')
              try:
                print('parameters used : ', train_len, test_len)
                low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse,low_vol_mae = get_arima(low_vol,low_vol_data, train_len, test_len)
              except:
                  print('ARIMA error, skipping to next MA type')
                  continue
              Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr, high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse, high_vol_mae = get_lstm(high_vol, high_vol_data, train_len, test_len, imgfile, ma)
              final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps 
              mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
              rmse_ftr = mse_ftr ** 0.5
              mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
              mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)

              final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
              mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
              rmse = mse ** 0.5
              mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
              mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
              # Generate prediction accuracy
              actual = df['close'].tail(test_len).values
              result_1 = []
              result_2 = []
              for i in range(1, len(final_prediction)):
                  # Compare prediction to previous close price
                  if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                      result_1.append(1)
                  elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                      result_1.append(1)
                  else:
                      result_1.append(0)

                  # Compare prediction to previous prediction
                  if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                      result_2.append(1)
                  elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                      result_2.append(1)
                  else:
                      result_2.append(0)

              accuracy_1 = np.mean(result_1)
              accuracy_2 = np.mean(result_2)

              simulation2[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
                                            'rmse': low_vol_rmse, 'mae' : low_vol_mae},
                                'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
                                            'rmse': high_vol_rmse, 'mae' : high_vol_mae},
                                'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
                                            'rmse': rmse_ftr, 'mae' : mae_ftr},
                                'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
                                          'rmse': rmse, 'mae': mae },
                                'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}

              # save simulation data here as checkpoint
              with open('simulation2_data.json', 'w') as fp:
                  json.dump(simulation2, fp)

              # Print a running summary; use a separate loop variable so the outer `ma` is not overwritten
              for key in simulation2.keys():
                  print('\n' + key)
                  print('Prediction vs Close:\t\t' + str(round(100*simulation2[key]['accuracy']['prediction vs close'], 2))
                        + '% Accuracy')
                  print('Prediction vs Prediction:\t' + str(round(100*simulation2[key]['accuracy']['prediction vs prediction'], 2))
                        + '% Accuracy')
                  print('MSE:\t', simulation2[key]['final']['mse'],
                        '\nRMSE:\t', simulation2[key]['final']['rmse'],
                        '\nMAE:\t', simulation2[key]['final']['mae'])
            # else:
            #   break
    elapsed = timeit.default_timer() - start_time
    print('Runtime: mins:',elapsed/60)
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17

Working on SMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.49 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4157.020, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3687.148, Time=0.04 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.21 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3458.651, Time=0.08 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3322.133, Time=0.08 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=0.76 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.86 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3324.133, Time=0.24 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 2.790 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1657.067
Date:                Sun, 12 Dec 2021   AIC                           3322.133
Time:                        14:00:25   BIC                           3340.897
Sample:                             0   HQIC                          3329.339
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1966      0.003   -387.226      0.000      -1.203      -1.191
ar.L2         -0.8952      0.006   -138.692      0.000      -0.908      -0.883
ar.L3         -0.3968      0.006    -68.284      0.000      -0.408      -0.385
sigma2         3.5858      0.017    214.535      0.000       3.553       3.619
===================================================================================
Ljung-Box (L1) (Q):                  14.47   Jarque-Bera (JB):           2428881.42
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       271.99
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.06118, saving model to LSTM2.h5
48/48 - 4s - loss: 0.1411 - accuracy: 0.0000e+00 - val_loss: 0.0612 - val_accuracy: 0.0037 - lr: 0.0010 - 4s/epoch - 82ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.06118
48/48 - 0s - loss: 0.0648 - accuracy: 0.0000e+00 - val_loss: 0.0751 - val_accuracy: 0.0037 - lr: 0.0010 - 259ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.06118 to 0.01076, saving model to LSTM2.h5
48/48 - 0s - loss: 0.0226 - accuracy: 0.0000e+00 - val_loss: 0.0108 - val_accuracy: 0.0037 - lr: 0.0010 - 300ms/epoch - 6ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.01076
48/48 - 0s - loss: 0.0058 - accuracy: 0.0000e+00 - val_loss: 0.0202 - val_accuracy: 0.0037 - lr: 0.0010 - 280ms/epoch - 6ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.01076 to 0.00547, saving model to LSTM2.h5
48/48 - 0s - loss: 0.0055 - accuracy: 0.0000e+00 - val_loss: 0.0055 - val_accuracy: 0.0037 - lr: 0.0010 - 324ms/epoch - 7ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.00547
48/48 - 0s - loss: 0.0022 - accuracy: 0.0000e+00 - val_loss: 0.0057 - val_accuracy: 0.0037 - lr: 0.0010 - 280ms/epoch - 6ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.00547 to 0.00540, saving model to LSTM2.h5
48/48 - 0s - loss: 0.0016 - accuracy: 0.0000e+00 - val_loss: 0.0054 - val_accuracy: 0.0037 - lr: 0.0010 - 289ms/epoch - 6ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00540
48/48 - 0s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0062 - val_accuracy: 0.0037 - lr: 0.0010 - 270ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00540
48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0100 - val_accuracy: 0.0037 - lr: 0.0010 - 264ms/epoch - 6ms/step
Epoch 10/500

Epoch 00010: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00010: val_loss did not improve from 0.00540
48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0128 - val_accuracy: 0.0037 - lr: 0.0010 - 276ms/epoch - 6ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00540
48/48 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0146 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 262ms/epoch - 5ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00540
48/48 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0158 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 288ms/epoch - 6ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00540
48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0170 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 286ms/epoch - 6ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00540
48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0182 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 292ms/epoch - 6ms/step
Epoch 15/500

Epoch 00015: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00015: val_loss did not improve from 0.00540
48/48 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0193 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 286ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00540
48/48 - 0s - loss: 9.1783e-04 - accuracy: 0.0000e+00 - val_loss: 0.0193 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 283ms/epoch - 6ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00540
48/48 - 0s - loss: 9.1492e-04 - accuracy: 0.0000e+00 - val_loss: 0.0193 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 236ms/epoch - 5ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00540
48/48 - 0s - loss: 9.1247e-04 - accuracy: 0.0000e+00 - val_loss: 0.0193 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 269ms/epoch - 6ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00540
48/48 - 0s - loss: 9.1016e-04 - accuracy: 0.0000e+00 - val_loss: 0.0194 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 271ms/epoch - 6ms/step
Epoch 20/500

Epoch 00020: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00020: val_loss did not improve from 0.00540
48/48 - 0s - loss: 9.0794e-04 - accuracy: 0.0000e+00 - val_loss: 0.0194 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 249ms/epoch - 5ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.00540
48/48 - 0s - loss: 9.0577e-04 - accuracy: 0.0000e+00 - val_loss: 0.0195 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 284ms/epoch - 6ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00540
48/48 - 0s - loss: 9.0365e-04 - accuracy: 0.0000e+00 - val_loss: 0.0195 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 294ms/epoch - 6ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.00540
48/48 - 0s - loss: 9.0156e-04 - accuracy: 0.0000e+00 - val_loss: 0.0196 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 293ms/epoch - 6ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.9952e-04 - accuracy: 0.0000e+00 - val_loss: 0.0197 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 252ms/epoch - 5ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.9750e-04 - accuracy: 0.0000e+00 - val_loss: 0.0198 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 280ms/epoch - 6ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.9550e-04 - accuracy: 0.0000e+00 - val_loss: 0.0198 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 279ms/epoch - 6ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.9353e-04 - accuracy: 0.0000e+00 - val_loss: 0.0199 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 275ms/epoch - 6ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.9158e-04 - accuracy: 0.0000e+00 - val_loss: 0.0200 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 274ms/epoch - 6ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.8965e-04 - accuracy: 0.0000e+00 - val_loss: 0.0201 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 295ms/epoch - 6ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.8772e-04 - accuracy: 0.0000e+00 - val_loss: 0.0202 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 319ms/epoch - 7ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.8581e-04 - accuracy: 0.0000e+00 - val_loss: 0.0203 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 250ms/epoch - 5ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.8390e-04 - accuracy: 0.0000e+00 - val_loss: 0.0204 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 279ms/epoch - 6ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.8200e-04 - accuracy: 0.0000e+00 - val_loss: 0.0205 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 239ms/epoch - 5ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.8011e-04 - accuracy: 0.0000e+00 - val_loss: 0.0206 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 286ms/epoch - 6ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.7822e-04 - accuracy: 0.0000e+00 - val_loss: 0.0207 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 242ms/epoch - 5ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.7632e-04 - accuracy: 0.0000e+00 - val_loss: 0.0209 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 254ms/epoch - 5ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.7443e-04 - accuracy: 0.0000e+00 - val_loss: 0.0210 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 275ms/epoch - 6ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.7254e-04 - accuracy: 0.0000e+00 - val_loss: 0.0211 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 284ms/epoch - 6ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.7065e-04 - accuracy: 0.0000e+00 - val_loss: 0.0212 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 269ms/epoch - 6ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.6876e-04 - accuracy: 0.0000e+00 - val_loss: 0.0213 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 280ms/epoch - 6ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.6687e-04 - accuracy: 0.0000e+00 - val_loss: 0.0215 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 298ms/epoch - 6ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.6497e-04 - accuracy: 0.0000e+00 - val_loss: 0.0216 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 251ms/epoch - 5ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.6308e-04 - accuracy: 0.0000e+00 - val_loss: 0.0217 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 321ms/epoch - 7ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.6118e-04 - accuracy: 0.0000e+00 - val_loss: 0.0219 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 262ms/epoch - 5ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.5929e-04 - accuracy: 0.0000e+00 - val_loss: 0.0220 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 256ms/epoch - 5ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.5739e-04 - accuracy: 0.0000e+00 - val_loss: 0.0221 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 271ms/epoch - 6ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.5550e-04 - accuracy: 0.0000e+00 - val_loss: 0.0223 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 310ms/epoch - 6ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.5360e-04 - accuracy: 0.0000e+00 - val_loss: 0.0224 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 265ms/epoch - 6ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.5170e-04 - accuracy: 0.0000e+00 - val_loss: 0.0225 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 293ms/epoch - 6ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.4981e-04 - accuracy: 0.0000e+00 - val_loss: 0.0227 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 286ms/epoch - 6ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.4791e-04 - accuracy: 0.0000e+00 - val_loss: 0.0228 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 307ms/epoch - 6ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.4602e-04 - accuracy: 0.0000e+00 - val_loss: 0.0229 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 260ms/epoch - 5ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.4413e-04 - accuracy: 0.0000e+00 - val_loss: 0.0231 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 265ms/epoch - 6ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.4224e-04 - accuracy: 0.0000e+00 - val_loss: 0.0232 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 291ms/epoch - 6ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.4036e-04 - accuracy: 0.0000e+00 - val_loss: 0.0233 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 277ms/epoch - 6ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.3847e-04 - accuracy: 0.0000e+00 - val_loss: 0.0235 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 277ms/epoch - 6ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.00540
48/48 - 0s - loss: 8.3659e-04 - accuracy: 0.0000e+00 - val_loss: 0.0236 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 274ms/epoch - 6ms/step
Epoch 00057: early stopping
SMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 39.87524298067742 
RMSE:	 6.3146847095225125 
MAPE:	 5.088204561858354
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51

Working on EMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.41 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4231.556, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3761.238, Time=0.05 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.28 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3532.227, Time=0.08 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3394.496, Time=0.10 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=0.81 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.66 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3396.496, Time=0.22 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 2.636 seconds
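pmdarima's `auto_arima` drives the stepwise search above, fitting candidate orders and keeping the one with the lowest AIC (the `inf` entries are fits that failed to converge). The selection rule itself is just an argmin, sketched here over the AIC values printed in the search trace:

```python
import math

# (p, d, q) candidates and their AICs, taken from the stepwise output above.
candidates = {
    (1, 3, 1): math.inf,
    (0, 3, 0): 4231.556,
    (1, 3, 0): 3761.238,
    (0, 3, 1): math.inf,
    (2, 3, 0): 3532.227,
    (3, 3, 0): 3394.496,
    (3, 3, 1): math.inf,
    (2, 3, 1): math.inf,
}
best_order = min(candidates, key=candidates.get)
print(best_order)  # matches "Best model: ARIMA(3,3,0)"
```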
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1693.248
Date:                Sun, 12 Dec 2021   AIC                           3394.496
Time:                        14:01:56   BIC                           3413.260
Sample:                             0   HQIC                          3401.702
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.569      0.000      -1.204      -1.192
ar.L2         -0.8976      0.006   -139.811      0.000      -0.910      -0.885
ar.L3         -0.3984      0.006    -68.662      0.000      -0.410      -0.387
sigma2         3.9230      0.018    215.372      0.000       3.887       3.959
===================================================================================
Ljung-Box (L1) (Q):                  14.54   Jarque-Bera (JB):           2462173.05
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.82
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
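The summary above reports a residual kurtosis of 273.82, far from the mesokurtic value of 3 that the introduction flags as a concern, and a correspondingly enormous Jarque-Bera statistic. A numpy sketch of both quantities from their definitions (`scipy.stats.jarque_bera` provides the same test with a p-value):

```python
import numpy as np

def kurtosis(x):
    """Sample kurtosis (Pearson's definition; normal data gives ~3)."""
    z = (np.asarray(x, dtype=float) - np.mean(x)) / np.std(x)
    return float(np.mean(z ** 4))

def jarque_bera(x):
    """JB statistic: large values reject normality of the residuals."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    skew = float(np.mean(z ** 3))
    kurt = float(np.mean(z ** 4))
    return len(x) / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

rng = np.random.default_rng(0)
print(kurtosis(rng.normal(size=50_000)))   # near 3: mesokurtic
print(kurtosis(rng.laplace(size=50_000)))  # well above 3: leptokurtic
```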

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04789, saving model to LSTM2.h5
16/16 - 4s - loss: 0.1362 - accuracy: 0.0000e+00 - val_loss: 0.0479 - val_accuracy: 0.0037 - lr: 0.0010 - 4s/epoch - 234ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.04789 to 0.01125, saving model to LSTM2.h5
16/16 - 0s - loss: 0.0586 - accuracy: 0.0000e+00 - val_loss: 0.0112 - val_accuracy: 0.0037 - lr: 0.0010 - 126ms/epoch - 8ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.01125
16/16 - 0s - loss: 0.0240 - accuracy: 0.0000e+00 - val_loss: 0.0633 - val_accuracy: 0.0037 - lr: 0.0010 - 111ms/epoch - 7ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.01125
16/16 - 0s - loss: 0.0143 - accuracy: 0.0000e+00 - val_loss: 0.0290 - val_accuracy: 0.0037 - lr: 0.0010 - 100ms/epoch - 6ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.01125
16/16 - 0s - loss: 0.0108 - accuracy: 0.0000e+00 - val_loss: 0.0113 - val_accuracy: 0.0037 - lr: 0.0010 - 102ms/epoch - 6ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.01125 to 0.00912, saving model to LSTM2.h5
16/16 - 0s - loss: 0.0236 - accuracy: 0.0000e+00 - val_loss: 0.0091 - val_accuracy: 0.0037 - lr: 0.0010 - 124ms/epoch - 8ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00912
16/16 - 0s - loss: 0.0040 - accuracy: 0.0000e+00 - val_loss: 0.0363 - val_accuracy: 0.0037 - lr: 0.0010 - 96ms/epoch - 6ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.00912 to 0.00408, saving model to LSTM2.h5
16/16 - 0s - loss: 0.0155 - accuracy: 0.0000e+00 - val_loss: 0.0041 - val_accuracy: 0.0037 - lr: 0.0010 - 128ms/epoch - 8ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00408
16/16 - 0s - loss: 0.0078 - accuracy: 0.0000e+00 - val_loss: 0.0142 - val_accuracy: 0.0037 - lr: 0.0010 - 125ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00408
16/16 - 0s - loss: 0.0150 - accuracy: 0.0000e+00 - val_loss: 0.0163 - val_accuracy: 0.0037 - lr: 0.0010 - 119ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00408
16/16 - 0s - loss: 0.0069 - accuracy: 0.0000e+00 - val_loss: 0.0181 - val_accuracy: 0.0037 - lr: 0.0010 - 117ms/epoch - 7ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00408
16/16 - 0s - loss: 0.0113 - accuracy: 0.0000e+00 - val_loss: 0.0098 - val_accuracy: 0.0037 - lr: 0.0010 - 107ms/epoch - 7ms/step
Epoch 13/500

Epoch 00013: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00013: val_loss did not improve from 0.00408
16/16 - 0s - loss: 0.0124 - accuracy: 0.0000e+00 - val_loss: 0.0073 - val_accuracy: 0.0037 - lr: 0.0010 - 113ms/epoch - 7ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00408
16/16 - 0s - loss: 0.0104 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 98ms/epoch - 6ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00408
16/16 - 0s - loss: 0.0017 - accuracy: 0.0000e+00 - val_loss: 0.0049 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 109ms/epoch - 7ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00408
16/16 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0048 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 115ms/epoch - 7ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00408
16/16 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 99ms/epoch - 6ms/step
Epoch 18/500

Epoch 00018: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00018: val_loss did not improve from 0.00408
16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 135ms/epoch - 8ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00408
16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 114ms/epoch - 7ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.00408
16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 111ms/epoch - 7ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.00408
16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 108ms/epoch - 7ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00408
16/16 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 23/500

Epoch 00023: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00023: val_loss did not improve from 0.00408
16/16 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00408
16/16 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 111ms/epoch - 7ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.00408
16/16 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 101ms/epoch - 6ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00408
16/16 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 104ms/epoch - 7ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00408
16/16 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 119ms/epoch - 7ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00408
16/16 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 106ms/epoch - 7ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00408
16/16 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 111ms/epoch - 7ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00408
16/16 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 118ms/epoch - 7ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00408
16/16 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 107ms/epoch - 7ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00408
16/16 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 112ms/epoch - 7ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00408
16/16 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00408
16/16 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00408
16/16 - 0s - loss: 9.9942e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 101ms/epoch - 6ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00408
16/16 - 0s - loss: 9.9609e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 130ms/epoch - 8ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00408
16/16 - 0s - loss: 9.9281e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 108ms/epoch - 7ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00408
16/16 - 0s - loss: 9.8958e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 109ms/epoch - 7ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00408
16/16 - 0s - loss: 9.8640e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00408
16/16 - 0s - loss: 9.8326e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00408
16/16 - 0s - loss: 9.8018e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00408
16/16 - 0s - loss: 9.7714e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00408
16/16 - 0s - loss: 9.7416e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00408
16/16 - 0s - loss: 9.7123e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 108ms/epoch - 7ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00408
16/16 - 0s - loss: 9.6835e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 105ms/epoch - 7ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00408
16/16 - 0s - loss: 9.6552e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 115ms/epoch - 7ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00408
16/16 - 0s - loss: 9.6274e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 109ms/epoch - 7ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00408
16/16 - 0s - loss: 9.6001e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 112ms/epoch - 7ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00408
16/16 - 0s - loss: 9.5734e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00408
16/16 - 0s - loss: 9.5472e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 105ms/epoch - 7ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00408
16/16 - 0s - loss: 9.5214e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00408
16/16 - 0s - loss: 9.4962e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 113ms/epoch - 7ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00408
16/16 - 0s - loss: 9.4715e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00408
16/16 - 0s - loss: 9.4472e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 113ms/epoch - 7ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00408
16/16 - 0s - loss: 9.4235e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 111ms/epoch - 7ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00408
16/16 - 0s - loss: 9.4002e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 131ms/epoch - 8ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.00408
16/16 - 0s - loss: 9.3773e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 114ms/epoch - 7ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.00408
16/16 - 0s - loss: 9.3550e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 00058: early stopping
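The learning-rate drops (1e-3 → 1e-4 → 1e-5) and the "early stopping" lines in the logs above come from Keras' `ReduceLROnPlateau` and `EarlyStopping` callbacks monitoring `val_loss`. A pure-Python simulation of that plateau logic — the factor of 0.1, the 1e-5 floor, and the patience values are assumptions inferred from the spacing in the log, not the notebook's confirmed settings:

```python
class PlateauMonitor:
    """Minimal sketch of ReduceLROnPlateau + EarlyStopping behaviour
    (factor, min_lr, and patience values are assumed, not from the notebook)."""
    def __init__(self, lr=1e-3, factor=0.1, min_lr=1e-5,
                 lr_patience=5, stop_patience=50):
        self.lr, self.factor, self.min_lr = lr, factor, min_lr
        self.lr_patience, self.stop_patience = lr_patience, stop_patience
        self.best = float("inf")
        self.lr_wait = self.stop_wait = 0
        self.stopped = False

    def on_epoch_end(self, val_loss):
        if val_loss < self.best:                       # improvement resets both counters
            self.best = val_loss
            self.lr_wait = self.stop_wait = 0
            return
        self.lr_wait += 1
        self.stop_wait += 1
        if self.lr_wait >= self.lr_patience:           # plateau: cut the learning rate
            self.lr = max(self.lr * self.factor, self.min_lr)
            self.lr_wait = 0
        if self.stop_wait >= self.stop_patience:       # long plateau: stop training
            self.stopped = True

monitor = PlateauMonitor()
losses = [0.048, 0.011, 0.063, 0.029, 0.011, 0.0091, 0.036, 0.0041] + [0.02] * 10
for loss in losses:
    monitor.on_epoch_end(loss)
print(monitor.lr, monitor.best, monitor.stopped)
```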
SMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 39.87524298067742 
RMSE:	 6.3146847095225125 
MAPE:	 5.088204561858354

EMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 73.6938994303845 
RMSE:	 8.584515095821342 
MAPE:	 7.207683534137507
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49
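TA-Lib's WMA weights the most recent price highest, with linearly increasing weights 1..timeperiod. A numpy sketch of that calculation (illustrative, not the notebook's own code):

```python
import numpy as np

def wma(price, timeperiod=30):
    """Weighted moving average: weights 1..n, newest price weighted n."""
    price = np.asarray(price, dtype=float)
    w = np.arange(1, timeperiod + 1, dtype=float)
    w /= w.sum()
    out = np.full(price.shape, np.nan)     # NaN until a full window is available
    for i in range(timeperiod - 1, len(price)):
        out[i] = np.dot(price[i - timeperiod + 1:i + 1], w)
    return out

print(wma([1.0, 2.0, 3.0, 4.0, 5.0], timeperiod=3))
```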

Working on WMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.40 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4264.089, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3793.930, Time=0.05 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.25 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3564.923, Time=0.08 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3427.258, Time=0.09 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.26 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.48 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3429.258, Time=0.19 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 2.840 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1709.629
Date:                Sun, 12 Dec 2021   AIC                           3427.258
Time:                        14:03:16   BIC                           3446.021
Sample:                             0   HQIC                          3434.464
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1981      0.003   -389.386      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.699      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.737      0.000      -0.410      -0.387
sigma2         4.0860      0.019    215.311      0.000       4.049       4.123
===================================================================================
Ljung-Box (L1) (Q):                  14.57   Jarque-Bera (JB):           2460901.70
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.75
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.00916, saving model to LSTM2.h5
17/17 - 4s - loss: 0.0891 - accuracy: 0.0000e+00 - val_loss: 0.0092 - val_accuracy: 0.0037 - lr: 0.0010 - 4s/epoch - 237ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.00916
17/17 - 0s - loss: 0.0709 - accuracy: 0.0000e+00 - val_loss: 0.0856 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 102ms/epoch - 6ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.00916
17/17 - 0s - loss: 0.0175 - accuracy: 0.0000e+00 - val_loss: 0.0689 - val_accuracy: 0.0037 - lr: 0.0010 - 114ms/epoch - 7ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.00916 to 0.00554, saving model to LSTM2.h5
17/17 - 0s - loss: 0.0238 - accuracy: 0.0000e+00 - val_loss: 0.0055 - val_accuracy: 0.0037 - lr: 0.0010 - 143ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.00554
17/17 - 0s - loss: 0.0163 - accuracy: 0.0000e+00 - val_loss: 0.0060 - val_accuracy: 0.0037 - lr: 0.0010 - 122ms/epoch - 7ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.00554
17/17 - 0s - loss: 0.0210 - accuracy: 0.0000e+00 - val_loss: 0.0453 - val_accuracy: 0.0037 - lr: 0.0010 - 103ms/epoch - 6ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00554
17/17 - 0s - loss: 0.0146 - accuracy: 0.0000e+00 - val_loss: 0.0367 - val_accuracy: 0.0037 - lr: 0.0010 - 108ms/epoch - 6ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00554
17/17 - 0s - loss: 0.0165 - accuracy: 0.0000e+00 - val_loss: 0.0063 - val_accuracy: 0.0037 - lr: 0.0010 - 103ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.00554 to 0.00500, saving model to LSTM2.h5
17/17 - 0s - loss: 0.0169 - accuracy: 0.0000e+00 - val_loss: 0.0050 - val_accuracy: 0.0037 - lr: 0.0010 - 135ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00500
17/17 - 0s - loss: 0.0069 - accuracy: 0.0000e+00 - val_loss: 0.0310 - val_accuracy: 0.0037 - lr: 0.0010 - 99ms/epoch - 6ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00500
17/17 - 0s - loss: 0.0106 - accuracy: 0.0000e+00 - val_loss: 0.0083 - val_accuracy: 0.0037 - lr: 0.0010 - 101ms/epoch - 6ms/step
Epoch 12/500

Epoch 00012: val_loss improved from 0.00500 to 0.00478, saving model to LSTM2.h5
17/17 - 0s - loss: 0.0041 - accuracy: 0.0000e+00 - val_loss: 0.0048 - val_accuracy: 0.0037 - lr: 0.0010 - 123ms/epoch - 7ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00478
17/17 - 0s - loss: 0.0083 - accuracy: 0.0000e+00 - val_loss: 0.0088 - val_accuracy: 0.0037 - lr: 0.0010 - 114ms/epoch - 7ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00478
17/17 - 0s - loss: 0.0022 - accuracy: 0.0000e+00 - val_loss: 0.0157 - val_accuracy: 0.0037 - lr: 0.0010 - 107ms/epoch - 6ms/step
Epoch 15/500

Epoch 00015: val_loss improved from 0.00478 to 0.00474, saving model to LSTM2.h5
17/17 - 0s - loss: 0.0050 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 0.0010 - 126ms/epoch - 7ms/step
Epoch 16/500

Epoch 00016: val_loss improved from 0.00474 to 0.00383, saving model to LSTM2.h5
17/17 - 0s - loss: 0.0026 - accuracy: 0.0000e+00 - val_loss: 0.0038 - val_accuracy: 0.0037 - lr: 0.0010 - 124ms/epoch - 7ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00383
17/17 - 0s - loss: 0.0042 - accuracy: 0.0000e+00 - val_loss: 0.0091 - val_accuracy: 0.0037 - lr: 0.0010 - 104ms/epoch - 6ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00383
17/17 - 0s - loss: 0.0018 - accuracy: 0.0000e+00 - val_loss: 0.0122 - val_accuracy: 0.0037 - lr: 0.0010 - 119ms/epoch - 7ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00383
17/17 - 0s - loss: 0.0035 - accuracy: 0.0000e+00 - val_loss: 0.0053 - val_accuracy: 0.0037 - lr: 0.0010 - 132ms/epoch - 8ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.00383
17/17 - 0s - loss: 0.0023 - accuracy: 0.0000e+00 - val_loss: 0.0039 - val_accuracy: 0.0037 - lr: 0.0010 - 102ms/epoch - 6ms/step
Epoch 21/500

Epoch 00021: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00021: val_loss did not improve from 0.00383
17/17 - 0s - loss: 0.0040 - accuracy: 0.0000e+00 - val_loss: 0.0080 - val_accuracy: 0.0037 - lr: 0.0010 - 125ms/epoch - 7ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00383
17/17 - 0s - loss: 0.0018 - accuracy: 0.0000e+00 - val_loss: 0.0082 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 110ms/epoch - 6ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.00383
17/17 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0083 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 107ms/epoch - 6ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00383
17/17 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0077 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 120ms/epoch - 7ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.00383
17/17 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0071 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 98ms/epoch - 6ms/step
Epoch 26/500

Epoch 00026: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00026: val_loss did not improve from 0.00383
17/17 - 0s - loss: 9.5297e-04 - accuracy: 0.0000e+00 - val_loss: 0.0068 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 117ms/epoch - 7ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00383
17/17 - 0s - loss: 9.0436e-04 - accuracy: 0.0000e+00 - val_loss: 0.0068 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 107ms/epoch - 6ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00383
17/17 - 0s - loss: 9.0057e-04 - accuracy: 0.0000e+00 - val_loss: 0.0067 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.9689e-04 - accuracy: 0.0000e+00 - val_loss: 0.0067 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 105ms/epoch - 6ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.9330e-04 - accuracy: 0.0000e+00 - val_loss: 0.0067 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 108ms/epoch - 6ms/step
Epoch 31/500

Epoch 00031: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00031: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.8980e-04 - accuracy: 0.0000e+00 - val_loss: 0.0067 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 108ms/epoch - 6ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.8639e-04 - accuracy: 0.0000e+00 - val_loss: 0.0067 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 108ms/epoch - 6ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.8308e-04 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 110ms/epoch - 6ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.7985e-04 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 113ms/epoch - 7ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.7672e-04 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 109ms/epoch - 6ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.7368e-04 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.7074e-04 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 113ms/epoch - 7ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.6788e-04 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.6512e-04 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 118ms/epoch - 7ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.6245e-04 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 111ms/epoch - 7ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.5987e-04 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 109ms/epoch - 6ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.5738e-04 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 110ms/epoch - 6ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.5498e-04 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 111ms/epoch - 7ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.5266e-04 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.5043e-04 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 108ms/epoch - 6ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.4828e-04 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 105ms/epoch - 6ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.4622e-04 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 106ms/epoch - 6ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.4423e-04 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 115ms/epoch - 7ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.4232e-04 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 112ms/epoch - 7ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.4049e-04 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.3873e-04 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 114ms/epoch - 7ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.3704e-04 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 106ms/epoch - 6ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.3542e-04 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.3387e-04 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.3238e-04 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 106ms/epoch - 6ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.3095e-04 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 105ms/epoch - 6ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.2958e-04 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 122ms/epoch - 7ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.2827e-04 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 124ms/epoch - 7ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.2701e-04 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 124ms/epoch - 7ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.2580e-04 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 121ms/epoch - 7ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.2465e-04 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 114ms/epoch - 7ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.2354e-04 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.2247e-04 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.2145e-04 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.2047e-04 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 110ms/epoch - 6ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.00383
17/17 - 0s - loss: 8.1953e-04 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 112ms/epoch - 7ms/step
Epoch 00066: early stopping
SMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 39.87524298067742 
RMSE:	 6.3146847095225125 
MAPE:	 5.088204561858354

EMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 73.6938994303845 
RMSE:	 8.584515095821342 
MAPE:	 7.207683534137507

WMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 76.90233333404232 
RMSE:	 8.76939754681257 
MAPE:	 7.121360770950225
DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
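DEMA doubles an EMA and subtracts the EMA of that EMA, which reduces the lag of a plain EMA. A pandas sketch of the definition (TA-Lib's SMA-seeded EMA will differ slightly in the warm-up period):

```python
import pandas as pd

def dema(price, timeperiod=30):
    """DEMA = 2 * EMA(price) - EMA(EMA(price))."""
    s = pd.Series(price, dtype=float)
    ema1 = s.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    return 2 * ema1 - ema2

# On a constant series every average equals the constant itself.
print(dema([5.0] * 10, timeperiod=4).iloc[-1])
```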

Working on DEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.42 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4436.126, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3965.317, Time=0.04 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.38 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3736.589, Time=0.06 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3598.951, Time=0.09 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=0.97 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.94 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3600.951, Time=0.20 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.128 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1795.475
Date:                Sun, 12 Dec 2021   AIC                           3598.951
Time:                        14:04:40   BIC                           3617.714
Sample:                             0   HQIC                          3606.157
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1983      0.003   -389.581      0.000      -1.204      -1.192
ar.L2         -0.8973      0.006   -139.732      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.649      0.000      -0.410      -0.387
sigma2         5.0573      0.023    215.292      0.000       5.011       5.103
===================================================================================
Ljung-Box (L1) (Q):                  14.41   Jarque-Bera (JB):           2460553.80
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.89
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.74
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
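
The stepwise search above selects the order with the lowest Akaike Information Criterion, AIC = 2k - 2*log-likelihood, where k counts the estimated parameters. The reported AIC can be checked directly against the SARIMAX table (k = 4 here: three AR coefficients plus sigma2):

```python
# Check the reported AIC from the SARIMAX table: AIC = 2*k - 2*logL,
# with k = 4 estimated parameters (ar.L1, ar.L2, ar.L3, sigma2).
log_likelihood = -1795.475
k = 4
aic = 2 * k - 2 * log_likelihood
print(aic)  # -> 3598.95, matching the reported 3598.951 up to rounding
```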

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05011, saving model to LSTM2.h5
10/10 - 4s - loss: 0.1439 - accuracy: 0.0000e+00 - val_loss: 0.0501 - val_accuracy: 0.0037 - lr: 0.0010 - 4s/epoch - 412ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.05011 to 0.00966, saving model to LSTM2.h5
10/10 - 0s - loss: 0.1247 - accuracy: 0.0000e+00 - val_loss: 0.0097 - val_accuracy: 0.0037 - lr: 0.0010 - 98ms/epoch - 10ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.00966
10/10 - 0s - loss: 0.0865 - accuracy: 0.0000e+00 - val_loss: 0.1573 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 73ms/epoch - 7ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.00966
10/10 - 0s - loss: 0.0390 - accuracy: 0.0000e+00 - val_loss: 0.0470 - val_accuracy: 0.0037 - lr: 0.0010 - 74ms/epoch - 7ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.00966
10/10 - 0s - loss: 0.0124 - accuracy: 0.0000e+00 - val_loss: 0.0357 - val_accuracy: 0.0037 - lr: 0.0010 - 67ms/epoch - 7ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.00966
10/10 - 0s - loss: 0.0035 - accuracy: 0.0000e+00 - val_loss: 0.0244 - val_accuracy: 0.0037 - lr: 0.0010 - 71ms/epoch - 7ms/step
Epoch 7/500

Epoch 00007: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00007: val_loss did not improve from 0.00966
10/10 - 0s - loss: 0.0018 - accuracy: 0.0000e+00 - val_loss: 0.0171 - val_accuracy: 0.0037 - lr: 0.0010 - 73ms/epoch - 7ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00966
10/10 - 0s - loss: 0.0018 - accuracy: 0.0000e+00 - val_loss: 0.0183 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 69ms/epoch - 7ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00966
10/10 - 0s - loss: 0.0016 - accuracy: 0.0000e+00 - val_loss: 0.0200 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 75ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00966
10/10 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0214 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 69ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00966
10/10 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0222 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 76ms/epoch - 8ms/step
Epoch 12/500

Epoch 00012: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00012: val_loss did not improve from 0.00966
10/10 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0225 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 77ms/epoch - 8ms/step
Epoch 00013-00051: val_loss did not improve from 0.00966 (loss plateaued at 0.0013; val_loss drifted from 0.0226 to 0.0235; lr held at its 1.0000e-05 floor)
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00966
10/10 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0235 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 81ms/epoch - 8ms/step
Epoch 00052: early stopping
SMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 39.87524298067742 
RMSE:	 6.3146847095225125 
MAPE:	 5.088204561858354

EMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 73.6938994303845 
RMSE:	 8.584515095821342 
MAPE:	 7.207683534137507

WMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 76.90233333404232 
RMSE:	 8.76939754681257 
MAPE:	 7.121360770950225

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 131.0602137141292 
RMSE:	 11.448153288374908 
MAPE:	 10.329401784343453
KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18
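
KAMA scales its smoothing constant by an efficiency ratio: the net price change over the window divided by the sum of absolute bar-to-bar changes, so it tracks trends quickly and flattens out in choppy ranges. An illustrative NumPy version using TA-Lib's default fast/slow constants (2 and 30) and seeded with the first in-window price (TA-Lib's exact warm-up differs slightly); the notebook itself calls TA-Lib's `KAMA`:

```python
import numpy as np

def kama(price, timeperiod=30, fast=2, slow=30):
    # Kaufman Adaptive MA: the smoothing constant adapts per bar to the
    # efficiency ratio ER = |net change| / sum(|bar-to-bar changes|).
    price = np.asarray(price, dtype=float)
    n = timeperiod
    fastest = 2.0 / (fast + 1)
    slowest = 2.0 / (slow + 1)
    out = np.full_like(price, np.nan)
    out[n - 1] = price[n - 1]  # seed with the first full-window price
    for t in range(n, len(price)):
        change = abs(price[t] - price[t - n])
        volatility = np.abs(np.diff(price[t - n:t + 1])).sum()
        er = change / volatility if volatility else 0.0  # efficiency ratio
        sc = (er * (fastest - slowest) + slowest) ** 2   # smoothing constant
        out[t] = out[t - 1] + sc * (price[t] - out[t - 1])
    return out

prices = np.arange(1.0, 101.0)
print(kama(prices, timeperiod=10)[-1])
```

On a perfectly trending series the efficiency ratio is 1 and KAMA behaves like a fast EMA; on a sideways series it collapses toward the slow constant and barely moves.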

Working on KAMA predictions
parameters used: 808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.36 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4190.464, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3724.371, Time=0.05 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.29 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3494.154, Time=0.09 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3357.435, Time=0.13 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.28 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.78 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3359.435, Time=0.19 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.204 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1674.717
Date:                Sun, 12 Dec 2021   AIC                           3357.435
Time:                        14:05:53   BIC                           3376.198
Sample:                             0   HQIC                          3364.641
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1955      0.003   -381.246      0.000      -1.202      -1.189
ar.L2         -0.8964      0.007   -135.835      0.000      -0.909      -0.883
ar.L3         -0.3971      0.006    -67.229      0.000      -0.409      -0.385
sigma2         3.7466      0.018    211.623      0.000       3.712       3.781
===================================================================================
Ljung-Box (L1) (Q):                  14.20   Jarque-Bera (JB):           2338363.32
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.01   Skew:                             3.76
Prob(H) (two-sided):                  0.00   Kurtosis:                       266.93
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.15923, saving model to LSTM2.h5
45/45 - 4s - loss: 0.1495 - accuracy: 0.0000e+00 - val_loss: 0.1592 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 4s/epoch - 86ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.15923 to 0.01635, saving model to LSTM2.h5
45/45 - 0s - loss: 0.0565 - accuracy: 0.0000e+00 - val_loss: 0.0164 - val_accuracy: 0.0037 - lr: 0.0010 - 322ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.01635
45/45 - 0s - loss: 0.0149 - accuracy: 0.0000e+00 - val_loss: 0.0477 - val_accuracy: 0.0037 - lr: 0.0010 - 255ms/epoch - 6ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.01635 to 0.00499, saving model to LSTM2.h5
45/45 - 0s - loss: 0.0160 - accuracy: 0.0000e+00 - val_loss: 0.0050 - val_accuracy: 0.0037 - lr: 0.0010 - 285ms/epoch - 6ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.00499
45/45 - 0s - loss: 0.0066 - accuracy: 0.0000e+00 - val_loss: 0.0185 - val_accuracy: 0.0037 - lr: 0.0010 - 253ms/epoch - 6ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.00499 to 0.00365, saving model to LSTM2.h5
45/45 - 0s - loss: 0.0046 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 0.0010 - 270ms/epoch - 6ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00365
45/45 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 0.0010 - 237ms/epoch - 5ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00365
45/45 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 0.0010 - 238ms/epoch - 5ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00365
45/45 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 0.0010 - 256ms/epoch - 6ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00365
45/45 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0063 - val_accuracy: 0.0037 - lr: 0.0010 - 292ms/epoch - 6ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00011: val_loss did not improve from 0.00365
45/45 - 0s - loss: 0.0018 - accuracy: 0.0000e+00 - val_loss: 0.0091 - val_accuracy: 0.0037 - lr: 0.0010 - 280ms/epoch - 6ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00365
45/45 - 0s - loss: 0.0024 - accuracy: 0.0000e+00 - val_loss: 0.0052 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 259ms/epoch - 6ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00365
45/45 - 0s - loss: 0.0025 - accuracy: 0.0000e+00 - val_loss: 0.0074 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 251ms/epoch - 6ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00365
45/45 - 0s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0085 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 282ms/epoch - 6ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00365
45/45 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0095 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 273ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00016: val_loss did not improve from 0.00365
45/45 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0102 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 270ms/epoch - 6ms/step
Epoch 00017-00055: val_loss did not improve from 0.00365 (loss declined slowly from 9.03e-04 to 7.75e-04; val_loss rose from 0.0104 to 0.0132; lr held at its 1.0000e-05 floor)
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00365
45/45 - 0s - loss: 7.7404e-04 - accuracy: 0.0000e+00 - val_loss: 0.0133 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 276ms/epoch - 6ms/step
Epoch 00056: early stopping
SMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 39.87524298067742 
RMSE:	 6.3146847095225125 
MAPE:	 5.088204561858354

EMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 73.6938994303845 
RMSE:	 8.584515095821342 
MAPE:	 7.207683534137507

WMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 76.90233333404232 
RMSE:	 8.76939754681257 
MAPE:	 7.121360770950225

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 131.0602137141292 
RMSE:	 11.448153288374908 
MAPE:	 10.329401784343453

KAMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 42.128331463904196 
RMSE:	 6.49063413418937 
MAPE:	 5.225311201416733
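
The per-indicator summaries above report a directional accuracy alongside MSE, RMSE, and MAPE. A small helper of the assumed form (illustrative; the notebook's own evaluation code is not shown in this excerpt, and "Prediction vs Close" is taken here to mean agreement between the predicted and actual day-over-day direction):

```python
import numpy as np

def regression_report(y_true, y_pred):
    # Illustrative metrics; assumes the notebook's "Prediction vs Close"
    # accuracy means matching the sign of the day-over-day change.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = float(np.mean((y_true - y_pred) ** 2))
    rmse = float(np.sqrt(mse))
    mape = float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)
    direction = float(
        np.mean(np.sign(np.diff(y_pred)) == np.sign(np.diff(y_true))) * 100
    )
    return {"MSE": mse, "RMSE": rmse, "MAPE": mape, "Direction %": direction}

print(regression_report([100.0, 110.0, 105.0], [98.0, 112.0, 101.0]))
```

Note how the error metrics and the directional accuracy can disagree: DEMA's RMSE is the worst of the indicators so far even though its directional accuracy is close to the others.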
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14
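
MIDPOINT is the simplest of these overlays: the mean of the highest and lowest value of the input within the lookback window. A pandas equivalent (illustrative; the notebook calls TA-Lib's `MIDPOINT`):

```python
import pandas as pd

def midpoint(price: pd.Series, timeperiod: int = 14) -> pd.Series:
    # (rolling max + rolling min) / 2 over the lookback window
    roll = price.rolling(timeperiod)
    return (roll.max() + roll.min()) / 2.0

closes = pd.Series([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
print(midpoint(closes, timeperiod=3).tolist())
```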

Working on MIDPOINT predictions
parameters used: 808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.36 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4212.289, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3747.746, Time=0.04 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.23 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3523.401, Time=0.07 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3387.759, Time=0.09 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.22 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.85 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3389.758, Time=0.22 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.118 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1689.879
Date:                Sun, 12 Dec 2021   AIC                           3387.759
Time:                        14:07:24   BIC                           3406.522
Sample:                             0   HQIC                          3394.964
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1878      0.003   -345.315      0.000      -1.195      -1.181
ar.L2         -0.8876      0.007   -121.809      0.000      -0.902      -0.873
ar.L3         -0.3957      0.007    -60.127      0.000      -0.409      -0.383
sigma2         3.8904      0.020    193.404      0.000       3.851       3.930
===================================================================================
Ljung-Box (L1) (Q):                  13.21   Jarque-Bera (JB):           1659080.01
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.08   Skew:                             3.28
Prob(H) (two-sided):                  0.00   Kurtosis:                       225.31
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.01067, saving model to LSTM2.h5
58/58 - 4s - loss: 0.2600 - accuracy: 0.0000e+00 - val_loss: 0.0107 - val_accuracy: 0.0037 - lr: 0.0010 - 4s/epoch - 70ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.01067
58/58 - 0s - loss: 0.0424 - accuracy: 0.0000e+00 - val_loss: 0.0374 - val_accuracy: 0.0037 - lr: 0.0010 - 333ms/epoch - 6ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.01067
58/58 - 0s - loss: 0.0045 - accuracy: 0.0000e+00 - val_loss: 0.0623 - val_accuracy: 0.0037 - lr: 0.0010 - 343ms/epoch - 6ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.01067 to 0.00479, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0145 - accuracy: 0.0000e+00 - val_loss: 0.0048 - val_accuracy: 0.0037 - lr: 0.0010 - 351ms/epoch - 6ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.00479 to 0.00428, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0033 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 0.0010 - 341ms/epoch - 6ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.00428
58/58 - 0s - loss: 0.0046 - accuracy: 0.0000e+00 - val_loss: 0.0049 - val_accuracy: 0.0037 - lr: 0.0010 - 322ms/epoch - 6ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00428
58/58 - 0s - loss: 0.0042 - accuracy: 0.0000e+00 - val_loss: 0.0057 - val_accuracy: 0.0037 - lr: 0.0010 - 340ms/epoch - 6ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00428
58/58 - 0s - loss: 0.0074 - accuracy: 0.0000e+00 - val_loss: 0.0054 - val_accuracy: 0.0037 - lr: 0.0010 - 298ms/epoch - 5ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00428
58/58 - 0s - loss: 0.0154 - accuracy: 0.0000e+00 - val_loss: 0.0057 - val_accuracy: 0.0037 - lr: 0.0010 - 315ms/epoch - 5ms/step
Epoch 10/500

Epoch 00010: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00010: val_loss did not improve from 0.00428
58/58 - 0s - loss: 0.0342 - accuracy: 0.0000e+00 - val_loss: 0.0071 - val_accuracy: 0.0037 - lr: 0.0010 - 338ms/epoch - 6ms/step
[Epochs 11–54 condensed: val_loss did not improve from 0.00428; training loss fell from 0.0489 to 0.0011; ReduceLROnPlateau cut the learning rate to its 1e-05 floor at epoch 15.]
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00428
58/58 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0052 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 339ms/epoch - 6ms/step
Epoch 00055: early stopping
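The schedule visible in the log above (lr stepping 1e-3 → 1e-4 → 1e-5 on plateaus, a checkpoint saved on each val_loss improvement, training halted after a long run without improvement) can be mimicked with plain plateau logic. This is a stdlib-only sketch of what Keras's `ReduceLROnPlateau` and `EarlyStopping` callbacks do, not the actual callback code; the patience values and the loss curve are illustrative assumptions.

```python
def run_schedule(val_losses, lr=1e-3, factor=0.1, lr_patience=5,
                 stop_patience=50, min_lr=1e-5):
    """Mimic ReduceLROnPlateau + EarlyStopping over a sequence of val losses.

    Patience values are illustrative, not read from the notebook's config.
    """
    best = float("inf")
    since_best = 0          # epochs since val_loss last improved
    history = []
    for epoch, vl in enumerate(val_losses, start=1):
        if vl < best:
            best, since_best = vl, 0      # a ModelCheckpoint would save here
        else:
            since_best += 1
        # cut the learning rate every lr_patience epochs without improvement
        if since_best and since_best % lr_patience == 0:
            lr = max(lr * factor, min_lr)
        history.append((epoch, lr, best))
        if since_best >= stop_patience:   # early stopping
            break
    return history

# A loss curve that improves early, then plateaus (illustrative data):
hist = run_schedule([0.0107, 0.0374, 0.0048, 0.0043] + [0.01] * 60)
print(hist[-1])   # final (epoch, lr, best_val_loss)
```

With these numbers the run stops at epoch 54 with the learning rate pinned at the 1e-05 floor, the same qualitative shape as the log above.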
SMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 39.87524298067742 
RMSE:	 6.3146847095225125 
MAPE:	 5.088204561858354

EMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 73.6938994303845 
RMSE:	 8.584515095821342 
MAPE:	 7.207683534137507

WMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 76.90233333404232 
RMSE:	 8.76939754681257 
MAPE:	 7.121360770950225

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 131.0602137141292 
RMSE:	 11.448153288374908 
MAPE:	 10.329401784343453

KAMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 42.128331463904196 
RMSE:	 6.49063413418937 
MAPE:	 5.225311201416733

MIDPOINT
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 102.72361049298229 
RMSE:	 10.135265684380567 
MAPE:	 8.372939983805384
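The figures printed above (MSE, RMSE, MAPE, and the directional-accuracy percentages) can be reproduced with a few stdlib functions. A minimal sketch, where "Prediction vs Close" is read as the share of steps on which the predicted move from the previous close matches the actual move — my interpretation of the label, not confirmed by the source:

```python
import math

def mse(y, yhat):
    return sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)

def rmse(y, yhat):
    return math.sqrt(mse(y, yhat))

def mape(y, yhat):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs((a - b) / a) for a, b in zip(y, yhat)) / len(y)

def directional_accuracy(y, yhat):
    """Percent of steps where predicted and actual one-step moves agree in sign."""
    hits = sum(
        (yhat[i] - y[i - 1]) * (y[i] - y[i - 1]) > 0
        for i in range(1, len(y))
    )
    return 100 * hits / (len(y) - 1)

# Illustrative series, not the notebook's data:
close = [100, 102, 101, 104, 103]
pred  = [101, 103, 100, 105, 102]
print(rmse(close, pred), mape(close, pred), directional_accuracy(close, pred))
```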
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
19
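The help text above is TA-Lib's Tillson T3: a "generalized DEMA" GD(x) = (1+v)·EMA(x) − v·EMA(EMA(x)) applied three times with volume factor v. A pure-Python sketch of that published formula — the EMA here is seeded with the first value, a simplification, so early outputs will not match `talib.T3`'s own warm-up handling:

```python
def ema(xs, n):
    """Recursive EMA seeded with the first value (simplified warm-up)."""
    alpha = 2 / (n + 1)
    out = [xs[0]]
    for x in xs[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

def gd(xs, n, v):
    """Generalized DEMA: (1+v)*EMA(x) - v*EMA(EMA(x))."""
    e1 = ema(xs, n)
    e2 = ema(e1, n)
    return [(1 + v) * a - v * b for a, b in zip(e1, e2)]

def t3(xs, timeperiod=5, vfactor=0.7):
    """Tillson T3 = GD applied three times (defaults match the help text)."""
    out = xs
    for _ in range(3):
        out = gd(out, timeperiod, vfactor)
    return out

# Sanity check: T3 of a constant series stays at that constant.
print(t3([50.0] * 10)[-1])
```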

Working on T3 predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.35 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4414.515, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3944.062, Time=0.05 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.38 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3715.173, Time=0.07 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3577.471, Time=0.09 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.49 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.65 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3579.471, Time=0.18 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.292 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1784.736
Date:                Sun, 12 Dec 2021   AIC                           3577.471
Time:                        14:09:00   BIC                           3596.235
Sample:                             0   HQIC                          3584.677
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.844      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.861      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.862      0.000      -0.410      -0.387
sigma2         4.9242      0.023    215.469      0.000       4.879       4.969
===================================================================================
Ljung-Box (L1) (Q):                  14.55   Jarque-Bera (JB):           2468024.38
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       274.15
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
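The stepwise search above picks the order minimizing AIC = 2k − 2·ln L̂, where k is the number of estimated parameters. For a Gaussian model the log-likelihood collapses to a function of the residual variance, so the criterion can be checked by hand. A stdlib-only sketch with illustrative residuals (not the notebook's):

```python
import math

def gaussian_aic(residuals, k):
    """AIC = 2k - 2 ln L for residuals assumed i.i.d. N(0, sigma^2),
    with sigma^2 set to its MLE (the mean squared residual)."""
    n = len(residuals)
    sigma2 = sum(r * r for r in residuals) / n
    loglik = -0.5 * n * (math.log(2 * math.pi * sigma2) + 1)
    return 2 * k - 2 * loglik

# A better-fitting model (smaller residuals) wins despite one extra parameter,
# which is how ARIMA(3,3,0) beats ARIMA(2,3,0) in the search above:
tight = [0.1, -0.2, 0.15, -0.05, 0.1] * 40   # 200 residuals
loose = [1.0, -2.0, 1.5, -0.5, 1.0] * 40
print(gaussian_aic(tight, k=4), gaussian_aic(loose, k=3))
```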

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.07207, saving model to LSTM2.h5
43/43 - 5s - loss: 0.1415 - accuracy: 0.0000e+00 - val_loss: 0.0721 - val_accuracy: 0.0037 - lr: 0.0010 - 5s/epoch - 111ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.07207 to 0.03513, saving model to LSTM2.h5
43/43 - 0s - loss: 0.0578 - accuracy: 0.0000e+00 - val_loss: 0.0351 - val_accuracy: 0.0037 - lr: 0.0010 - 243ms/epoch - 6ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.03513
43/43 - 0s - loss: 0.0359 - accuracy: 0.0000e+00 - val_loss: 0.0626 - val_accuracy: 0.0037 - lr: 0.0010 - 220ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.03513 to 0.00755, saving model to LSTM2.h5
43/43 - 0s - loss: 0.0163 - accuracy: 0.0000e+00 - val_loss: 0.0076 - val_accuracy: 0.0037 - lr: 0.0010 - 245ms/epoch - 6ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.00755
43/43 - 0s - loss: 0.0112 - accuracy: 0.0000e+00 - val_loss: 0.0190 - val_accuracy: 0.0037 - lr: 0.0010 - 232ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.00755 to 0.00438, saving model to LSTM2.h5
43/43 - 0s - loss: 0.0046 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 0.0010 - 272ms/epoch - 6ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00438
43/43 - 0s - loss: 0.0037 - accuracy: 0.0000e+00 - val_loss: 0.0093 - val_accuracy: 0.0037 - lr: 0.0010 - 281ms/epoch - 7ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00438
43/43 - 0s - loss: 0.0016 - accuracy: 0.0000e+00 - val_loss: 0.0048 - val_accuracy: 0.0037 - lr: 0.0010 - 279ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00438
43/43 - 0s - loss: 0.0016 - accuracy: 0.0000e+00 - val_loss: 0.0067 - val_accuracy: 0.0037 - lr: 0.0010 - 241ms/epoch - 6ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00438
43/43 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0063 - val_accuracy: 0.0037 - lr: 0.0010 - 280ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00011: val_loss did not improve from 0.00438
43/43 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0075 - val_accuracy: 0.0037 - lr: 0.0010 - 239ms/epoch - 6ms/step
[Epochs 12–55 condensed: val_loss did not improve from 0.00438; training loss fell from 0.0018 to 7.9e-04; ReduceLROnPlateau cut the learning rate to its 1e-05 floor at epoch 16.]
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00438
43/43 - 0s - loss: 7.8677e-04 - accuracy: 0.0000e+00 - val_loss: 0.0154 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 260ms/epoch - 6ms/step
Epoch 00056: early stopping
[SMA–MIDPOINT metrics repeat the table above unchanged.]

T3
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 106.78678832331099 
RMSE:	 10.333769318274479 
MAPE:	 8.334873887974807
TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9
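TEMA, shown above with timeperiod=30, combines three cascaded EMAs as 3·EMA₁ − 3·EMA₂ + EMA₃ to cancel most of a single EMA's lag. A self-contained sketch of that formula — again with a simplified first-value EMA seed, so values near the start of a series won't match TA-Lib's:

```python
def ema(xs, n):
    """Recursive EMA seeded with the first value (simplified warm-up)."""
    alpha = 2 / (n + 1)
    out = [xs[0]]
    for x in xs[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

def tema(xs, timeperiod=30):
    """TEMA = 3*EMA1 - 3*EMA2 + EMA3, with the EMAs cascaded."""
    e1 = ema(xs, timeperiod)
    e2 = ema(e1, timeperiod)
    e3 = ema(e2, timeperiod)
    return [3 * a - 3 * b + c for a, b, c in zip(e1, e2, e3)]

# On a linear ramp, TEMA tracks the input far more closely than a single EMA,
# whose steady-state lag approaches (n - 1) / 2 samples:
ramp = [float(i) for i in range(120)]
print(ramp[-1] - tema(ramp, 30)[-1], ramp[-1] - ema(ramp, 30)[-1])
```

The lag cancellation is why TEMA-smoothed series stay more volatile than SMA/EMA ones, which matters for the ARIMA/LSTM volatility balance discussed in the introduction.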

Working on TEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.45 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4352.703, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3889.412, Time=0.04 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.26 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3689.930, Time=0.06 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3574.245, Time=0.08 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.10 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.75 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3576.245, Time=0.17 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 2.950 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1783.123
Date:                Sun, 12 Dec 2021   AIC                           3574.245
Time:                        14:10:25   BIC                           3593.008
Sample:                             0   HQIC                          3581.451
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1480      0.004   -302.430      0.000      -1.155      -1.141
ar.L2         -0.8300      0.008    -99.682      0.000      -0.846      -0.814
ar.L3         -0.3687      0.007    -50.527      0.000      -0.383      -0.354
sigma2         4.9055      0.028    175.970      0.000       4.851       4.960
===================================================================================
Ljung-Box (L1) (Q):                  11.61   Jarque-Bera (JB):           1261976.58
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.16   Skew:                             2.52
Prob(H) (two-sided):                  0.00   Kurtosis:                       196.90
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.12996, saving model to LSTM2.h5
90/90 - 4s - loss: 0.1545 - accuracy: 0.0000e+00 - val_loss: 0.1300 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 4s/epoch - 49ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.12996
90/90 - 0s - loss: 0.1665 - accuracy: 0.0000e+00 - val_loss: 0.2427 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 462ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.12996 to 0.00886, saving model to LSTM2.h5
90/90 - 0s - loss: 0.0386 - accuracy: 0.0000e+00 - val_loss: 0.0089 - val_accuracy: 0.0037 - lr: 0.0010 - 484ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.00886
90/90 - 0s - loss: 0.0097 - accuracy: 0.0000e+00 - val_loss: 0.0142 - val_accuracy: 0.0037 - lr: 0.0010 - 459ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.00886
90/90 - 0s - loss: 0.0025 - accuracy: 0.0000e+00 - val_loss: 0.0226 - val_accuracy: 0.0037 - lr: 0.0010 - 480ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.00886
90/90 - 0s - loss: 0.0029 - accuracy: 0.0000e+00 - val_loss: 0.0153 - val_accuracy: 0.0037 - lr: 0.0010 - 474ms/epoch - 5ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00886
90/90 - 1s - loss: 0.0034 - accuracy: 0.0000e+00 - val_loss: 0.0117 - val_accuracy: 0.0037 - lr: 0.0010 - 544ms/epoch - 6ms/step
Epoch 8/500

Epoch 00008: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00008: val_loss did not improve from 0.00886
90/90 - 1s - loss: 0.0041 - accuracy: 0.0000e+00 - val_loss: 0.0175 - val_accuracy: 0.0037 - lr: 0.0010 - 505ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.00886 to 0.00880, saving model to LSTM2.h5
90/90 - 1s - loss: 0.0044 - accuracy: 0.0000e+00 - val_loss: 0.0088 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 501ms/epoch - 6ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00880
90/90 - 0s - loss: 0.0044 - accuracy: 0.0000e+00 - val_loss: 0.0089 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 453ms/epoch - 5ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00880
90/90 - 0s - loss: 0.0027 - accuracy: 0.0000e+00 - val_loss: 0.0090 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 465ms/epoch - 5ms/step
Epoch 12/500

Epoch 00012: val_loss improved from 0.00880 to 0.00879, saving model to LSTM2.h5
90/90 - 0s - loss: 0.0022 - accuracy: 0.0000e+00 - val_loss: 0.0088 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 463ms/epoch - 5ms/step
Epoch 13/500

Epoch 00013: val_loss improved from 0.00879 to 0.00862, saving model to LSTM2.h5
90/90 - 0s - loss: 0.0019 - accuracy: 0.0000e+00 - val_loss: 0.0086 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 474ms/epoch - 5ms/step
Epoch 14/500

Epoch 00014: val_loss improved from 0.00862 to 0.00858, saving model to LSTM2.h5
90/90 - 1s - loss: 0.0017 - accuracy: 0.0000e+00 - val_loss: 0.0086 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 583ms/epoch - 6ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00858
90/90 - 1s - loss: 0.0016 - accuracy: 0.0000e+00 - val_loss: 0.0087 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 543ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00858
90/90 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0089 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 467ms/epoch - 5ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00858
90/90 - 1s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0093 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 529ms/epoch - 6ms/step
Epoch 18/500

Epoch 00018: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00018: val_loss did not improve from 0.00858
90/90 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0098 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 463ms/epoch - 5ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00858
90/90 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0101 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 485ms/epoch - 5ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.00858
90/90 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0104 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 476ms/epoch - 5ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.00858
90/90 - 1s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0106 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 556ms/epoch - 6ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00858
90/90 - 1s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0108 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 521ms/epoch - 6ms/step
Epoch 23/500

Epoch 00023: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00023: val_loss did not improve from 0.00858
90/90 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0110 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 476ms/epoch - 5ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00858
90/90 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0112 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 456ms/epoch - 5ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.00858
90/90 - 1s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0114 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 546ms/epoch - 6ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00858
90/90 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0115 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 463ms/epoch - 5ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00858
90/90 - 0s - loss: 9.9422e-04 - accuracy: 0.0000e+00 - val_loss: 0.0117 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 473ms/epoch - 5ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00858
90/90 - 1s - loss: 9.8459e-04 - accuracy: 0.0000e+00 - val_loss: 0.0119 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 549ms/epoch - 6ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00858
90/90 - 0s - loss: 9.7538e-04 - accuracy: 0.0000e+00 - val_loss: 0.0121 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 475ms/epoch - 5ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00858
90/90 - 0s - loss: 9.6655e-04 - accuracy: 0.0000e+00 - val_loss: 0.0123 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 470ms/epoch - 5ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00858
90/90 - 1s - loss: 9.5807e-04 - accuracy: 0.0000e+00 - val_loss: 0.0125 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 529ms/epoch - 6ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00858
90/90 - 0s - loss: 9.4989e-04 - accuracy: 0.0000e+00 - val_loss: 0.0127 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 460ms/epoch - 5ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00858
90/90 - 1s - loss: 9.4200e-04 - accuracy: 0.0000e+00 - val_loss: 0.0129 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 560ms/epoch - 6ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00858
90/90 - 0s - loss: 9.3435e-04 - accuracy: 0.0000e+00 - val_loss: 0.0131 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 475ms/epoch - 5ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00858
90/90 - 0s - loss: 9.2694e-04 - accuracy: 0.0000e+00 - val_loss: 0.0133 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 479ms/epoch - 5ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00858
90/90 - 0s - loss: 9.1975e-04 - accuracy: 0.0000e+00 - val_loss: 0.0135 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 463ms/epoch - 5ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00858
90/90 - 1s - loss: 9.1276e-04 - accuracy: 0.0000e+00 - val_loss: 0.0137 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 540ms/epoch - 6ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00858
90/90 - 1s - loss: 9.0597e-04 - accuracy: 0.0000e+00 - val_loss: 0.0140 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 513ms/epoch - 6ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00858
90/90 - 1s - loss: 8.9935e-04 - accuracy: 0.0000e+00 - val_loss: 0.0142 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 627ms/epoch - 7ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00858
90/90 - 0s - loss: 8.9291e-04 - accuracy: 0.0000e+00 - val_loss: 0.0144 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 480ms/epoch - 5ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00858
90/90 - 0s - loss: 8.8664e-04 - accuracy: 0.0000e+00 - val_loss: 0.0146 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 468ms/epoch - 5ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00858
90/90 - 0s - loss: 8.8052e-04 - accuracy: 0.0000e+00 - val_loss: 0.0149 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 453ms/epoch - 5ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00858
90/90 - 0s - loss: 8.7456e-04 - accuracy: 0.0000e+00 - val_loss: 0.0151 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 476ms/epoch - 5ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00858
90/90 - 0s - loss: 8.6874e-04 - accuracy: 0.0000e+00 - val_loss: 0.0154 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 487ms/epoch - 5ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00858
90/90 - 0s - loss: 8.6307e-04 - accuracy: 0.0000e+00 - val_loss: 0.0156 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 466ms/epoch - 5ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00858
90/90 - 1s - loss: 8.5753e-04 - accuracy: 0.0000e+00 - val_loss: 0.0159 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 534ms/epoch - 6ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00858
90/90 - 0s - loss: 8.5213e-04 - accuracy: 0.0000e+00 - val_loss: 0.0161 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 474ms/epoch - 5ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00858
90/90 - 1s - loss: 8.4685e-04 - accuracy: 0.0000e+00 - val_loss: 0.0164 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 547ms/epoch - 6ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00858
90/90 - 0s - loss: 8.4171e-04 - accuracy: 0.0000e+00 - val_loss: 0.0166 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 472ms/epoch - 5ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00858
90/90 - 1s - loss: 8.3668e-04 - accuracy: 0.0000e+00 - val_loss: 0.0169 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 548ms/epoch - 6ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00858
90/90 - 0s - loss: 8.3178e-04 - accuracy: 0.0000e+00 - val_loss: 0.0172 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 488ms/epoch - 5ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00858
90/90 - 1s - loss: 8.2699e-04 - accuracy: 0.0000e+00 - val_loss: 0.0175 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 589ms/epoch - 7ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00858
90/90 - 1s - loss: 8.2232e-04 - accuracy: 0.0000e+00 - val_loss: 0.0178 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 537ms/epoch - 6ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00858
90/90 - 0s - loss: 8.1777e-04 - accuracy: 0.0000e+00 - val_loss: 0.0180 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 462ms/epoch - 5ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00858
90/90 - 1s - loss: 8.1333e-04 - accuracy: 0.0000e+00 - val_loss: 0.0183 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 541ms/epoch - 6ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00858
90/90 - 0s - loss: 8.0900e-04 - accuracy: 0.0000e+00 - val_loss: 0.0186 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 455ms/epoch - 5ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.00858
90/90 - 0s - loss: 8.0478e-04 - accuracy: 0.0000e+00 - val_loss: 0.0189 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 462ms/epoch - 5ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.00858
90/90 - 0s - loss: 8.0068e-04 - accuracy: 0.0000e+00 - val_loss: 0.0192 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 464ms/epoch - 5ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.00858
90/90 - 0s - loss: 7.9669e-04 - accuracy: 0.0000e+00 - val_loss: 0.0195 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 455ms/epoch - 5ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.00858
90/90 - 1s - loss: 7.9282e-04 - accuracy: 0.0000e+00 - val_loss: 0.0198 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 529ms/epoch - 6ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.00858
90/90 - 1s - loss: 7.8906e-04 - accuracy: 0.0000e+00 - val_loss: 0.0202 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 547ms/epoch - 6ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.00858
90/90 - 0s - loss: 7.8541e-04 - accuracy: 0.0000e+00 - val_loss: 0.0205 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 476ms/epoch - 5ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.00858
90/90 - 0s - loss: 7.8187e-04 - accuracy: 0.0000e+00 - val_loss: 0.0208 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 469ms/epoch - 5ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.00858
90/90 - 0s - loss: 7.7845e-04 - accuracy: 0.0000e+00 - val_loss: 0.0211 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 475ms/epoch - 5ms/step
Epoch 00064: early stopping
SMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 39.87524298067742 
RMSE:	 6.3146847095225125 
MAPE:	 5.088204561858354

EMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 73.6938994303845 
RMSE:	 8.584515095821342 
MAPE:	 7.207683534137507

WMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 76.90233333404232 
RMSE:	 8.76939754681257 
MAPE:	 7.121360770950225

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 131.0602137141292 
RMSE:	 11.448153288374908 
MAPE:	 10.329401784343453

KAMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 42.128331463904196 
RMSE:	 6.49063413418937 
MAPE:	 5.225311201416733

MIDPOINT
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 102.72361049298229 
RMSE:	 10.135265684380567 
MAPE:	 8.372939983805384

T3
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 106.78678832331099 
RMSE:	 10.333769318274479 
MAPE:	 8.334873887974807

TEMA
Prediction vs Close:		50.75% Accuracy
Prediction vs Prediction:	48.88% Accuracy
MSE:	 78.66857412996737 
RMSE:	 8.86953066007257 
MAPE:	 7.566655356961055
Runtime: mins: 11.6630284996
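The error figures above (here and in the experiments that follow) come from standard regression metrics. A minimal, self-contained sketch with toy values, using scikit-learn (note the notebook's own `mean_absolute_percentage_error` helper may differ slightly from scikit-learn's):

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error

# toy actual vs. predicted closing prices (illustrative values only)
actual = np.array([100.0, 102.0, 101.0, 105.0])
pred = np.array([101.0, 101.5, 102.0, 104.0])

mse = mean_squared_error(actual, pred)                      # mean squared error
rmse = mse ** 0.5                                           # root mean squared error
mape = 100 * mean_absolute_percentage_error(actual, pred)   # in percent

print(f'MSE:\t {mse}\nRMSE:\t {rmse}\nMAPE:\t {mape}')
```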

Architecture Used

In [98]:
from google.colab import files
import cv2
uploaded = files.upload()
In [99]:
img = cv2.imread('Experiment2.png')
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture '+imgfile,fontsize=18)
plt.imshow(img)
Out[99]:
<matplotlib.image.AxesImage at 0x7f75c042c4d0>

Model Plots

In [99]:
import json
with open('simulation2_data.json') as json_file:
    simulation2 = json.load(json_file)
fileimg = 'Experiment2'
In [100]:
for i in range(len(list(simulation2.keys()))):
  SIM = list(simulation2.keys())[i]
  plot_train(simulation2,SIM)
  plot_test(simulation2,SIM)
----- Train RMSE for SMA ----- 8.889283880470046
----- Train_MSE_LSTM for SMA ----- 79.0193679075846
----- Train MAE LSTM for SMA ----- 7.8115937103860835
----- Test RMSE for SMA----- 6.3146847095225125
----- Test_MSE_LSTM for SMA----- 39.87524298067742
----- Test_MAE_LSTM for SMA----- 5.088204561858354
----- Train RMSE for EMA ----- 10.150148453019417
----- Train_MSE_LSTM for EMA ----- 103.02551361833245
----- Train MAE LSTM for EMA ----- 8.952000885856286
----- Test RMSE for EMA----- 8.584515095821342
----- Test_MSE_LSTM for EMA----- 73.6938994303845
----- Test_MAE_LSTM for EMA----- 7.207683534137507
----- Train RMSE for WMA ----- 10.467180278295531
----- Train_MSE_LSTM for WMA ----- 109.56186297833894
----- Train MAE LSTM for WMA ----- 9.323534322736583
----- Test RMSE for WMA----- 8.76939754681257
----- Test_MSE_LSTM for WMA----- 76.90233333404232
----- Test_MAE_LSTM for WMA----- 7.121360770950225
----- Train RMSE for DEMA ----- 12.075408120538793
----- Train_MSE_LSTM for DEMA ----- 145.81548127757424
----- Train MAE LSTM for DEMA ----- 10.813702843841257
----- Test RMSE for DEMA----- 11.448153288374908
----- Test_MSE_LSTM for DEMA----- 131.0602137141292
----- Test_MAE_LSTM for DEMA----- 10.329401784343453
----- Train RMSE for KAMA ----- 10.56352496627593
----- Train_MSE_LSTM for KAMA ----- 111.58805971313488
----- Train MAE LSTM for KAMA ----- 9.51725184300564
----- Test RMSE for KAMA----- 6.49063413418937
----- Test_MSE_LSTM for KAMA----- 42.128331463904196
----- Test_MAE_LSTM for KAMA----- 5.225311201416733
----- Train RMSE for MIDPOINT ----- 9.508048912895521
----- Train_MSE_LSTM for MIDPOINT ----- 90.4029941300137
----- Train MAE LSTM for MIDPOINT ----- 8.394449433621876
----- Test RMSE for MIDPOINT----- 10.135265684380567
----- Test_MSE_LSTM for MIDPOINT----- 102.72361049298229
----- Test_MAE_LSTM for MIDPOINT----- 8.372939983805384
----- Train RMSE for T3 ----- 12.069596897007075
----- Train_MSE_LSTM for T3 ----- 145.67516925624284
----- Train MAE LSTM for T3 ----- 10.870868175127285
----- Test RMSE for T3----- 10.333769318274479
----- Test_MSE_LSTM for T3----- 106.78678832331099
----- Test_MAE_LSTM for T3----- 8.334873887974807
----- Train RMSE for TEMA ----- 7.440796513329569
----- Train_MSE_LSTM for TEMA ----- 55.36545275277747
----- Train MAE LSTM for TEMA ----- 5.170842815856772
----- Test RMSE for TEMA----- 8.86953066007257
----- Test_MSE_LSTM for TEMA----- 78.66857412996737
----- Test_MAE_LSTM for TEMA----- 7.566655356961055
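Every experiment rests on the same decomposition: the close series is split into a smooth moving-average component modelled by ARIMA and a volatile residual modelled by the LSTM, and the two forecasts are summed back together. A toy sketch of the split, with a plain pandas rolling mean standing in for the TA-Lib moving averages used in the notebook:

```python
import pandas as pd

# illustrative close prices
close = pd.Series([10.0, 11.0, 12.0, 11.5, 12.5, 13.0])

# low-volatility component: a simple 3-period moving average
# (the notebook uses TA-Lib MAs such as SMA, EMA, KAMA instead)
low_vol = close.rolling(window=3).mean().fillna(0)

# high-volatility component: what the moving average leaves behind
high_vol = close - low_vol

# summing the two components recovers the original series exactly
assert ((low_vol + high_vol) - close).abs().max() < 1e-12
```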

Univariate ARIMA Multistep Multivariate LSTM Hybrid Model Experiment 3

In [101]:
def get_lstm(data, original_data, train_len, test_len, img_file, ma, lstm_len=3):
    # prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Get data and check shape
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)  # X will be of shape 224 x 3 x 21 (each 3 x 21 array is 3 days' worth of data); yc holds the corresponding closing price values
    # pdb.set_trace()
    X_train, X_test = split_train_test(X)
    y_train, y_test = split_train_test(y)
    # yc_train, yc_test = split_train_test(original_data)
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20
    input_dim = X_train.shape[1]#3
    feature_size = X_train.shape[2]#24
    output_dim = y_train.shape[1]#1



    # # Option 1
    # # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    # model.add(Dense(units=64,activation='relu'))
    # model.add(Dropout(0.5))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')

    # ## Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()


    # # option 2
    # model = Sequential()
    # model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
    # model.add(Dense(64))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate=0.001), loss='mean_squared_error', metrics=['accuracy'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()




    # Option 3
    # Define a custom double-tanh activation (output range (-2, 2))
    class Double_Tanh(Activation):
        def __init__(self, activation, **kwargs):
            super(Double_Tanh, self).__init__(activation, **kwargs)
            self.__name__ = 'double_tanh'

    def double_tanh(x):
        return K.tanh(x) * 2

    get_custom_objects().update({'double_tanh': Double_Tanh(double_tanh)})

    # Model Generation
    model = Sequential()
    #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
    model.add(Dense(1))
    model.add(Activation(double_tanh))
    model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # Common code
    callbacks = [
    EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    ModelCheckpoint('LSTM3.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file+'.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # plot loss
    fname2 = img_file+'-'+ma
    plt.title(img_file+'-'+ma+' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2+'.png',dpi='figure')
    pyplot.show()

    # Option 4
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(X_train.shape[1], 1)))
    # model.add(LSTM(units=int(lstm_len/2)))
    # model.add(Dense(1, activation='sigmoid'))
    # model.compile(loss='mean_squared_error', optimizer='adam')
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()



    # Generate predictions
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
    outputtr = []
    for i in range(len(predictiontr)):
        outputtr.extend(predictiontr[i])
    predictiontr = outputtr
    # Generate error data

    ## replace with yc , xtest generated by new multistep method
    mse_tr = mean_squared_error(y_train, predictiontr)
    rmse_tr = mse_tr ** 0.5
    # mape = mean_absolute_percentage_error(X_test, pd.Series(predictiontr))
    mae_tr = mean_absolute_error(y_train, pd.Series(predictiontr))
    # Original_tr = pd.Series(yc_train)
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()


    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte)-det).tolist()
    outputte = []
    for i in range(len(predictionte)):
        outputte.extend(predictionte[i])
    predictionte = outputte
    # Generate error data

    mse_te = mean_squared_error(y_test, predictionte)
    rmse_te = mse_te ** 0.5
    # mape = mean_absolute_percentage_error(X_test, pd.Series(predictionte))
    mae_te = mean_absolute_error(y_test, pd.Series(predictionte))
    # Original_te = pd.Series(yc_test)
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()

    return Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,Original_te,predictionte, mse_te,rmse_te,mae_te
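Option 3's `double_tanh` activation is just tanh rescaled so the output spans (-2, 2), leaving headroom around the (-1, 1) range the targets are scaled to. A framework-free numpy sketch of the same function (the notebook wraps it in a Keras `Activation` subclass and registers it via `get_custom_objects`):

```python
import numpy as np

def double_tanh(x):
    # tanh scaled by 2: output lies in (-2, 2) instead of (-1, 1)
    return np.tanh(x) * 2

# saturates near -2 and 2, passes through 0 at the origin
out = double_tanh(np.array([-10.0, 0.0, 10.0]))
```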
In [102]:
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation3 = {}
    imgfile = 'Experiment3'
    for ma in optimized_period:
              print(ma)
              print(functions[ma])
              print ( int( optimized_period[ma]))
              low_vol = df.apply(lambda c:  functions[ma](c, timeperiod = int( optimized_period[ma])))
              low_vol = low_vol.fillna(0)
              low_vol_data = df['close']
              high_vol = pd.DataFrame()
              df2 = df.copy()
              for i in df2.columns:
                if i in low_vol.columns:
                  high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
              high_vol_data = df['close']
              ## *****************************************************
              # Generate ARIMA and LSTM predictions
              print('\nWorking on ' + ma + ' predictions')
              try:
                print('parameters used : ', train_len, test_len)
                low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse,low_vol_mae = get_arima(low_vol,low_vol_data, train_len, test_len)
              except:
                  print('ARIMA error, skipping to next MA type')
                  continue
              Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse,high_vol_mae, = get_lstm(high_vol,high_vol_data, train_len, test_len,imgfile,ma)
              final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps 
              mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
              rmse_ftr = mse_ftr ** 0.5
              mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
              mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)

              final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
              mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
              rmse = mse ** 0.5
              mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
              mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
              # Generate prediction accuracy
              actual = df['close'].tail(test_len).values
              result_1 = []
              result_2 = []
              for i in range(1, len(final_prediction)):
                  # Compare prediction to previous close price
                  if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                      result_1.append(1)
                  elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                      result_1.append(1)
                  else:
                      result_1.append(0)

                  # Compare prediction to previous prediction
                  if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                      result_2.append(1)
                  elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                      result_2.append(1)
                  else:
                      result_2.append(0)

              accuracy_1 = np.mean(result_1)
              accuracy_2 = np.mean(result_2)

              simulation3[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
                                            'rmse': low_vol_rmse, 'mae' : low_vol_mae},
                                'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
                                            'rmse': high_vol_rmse, 'mae' : high_vol_mae},
                                'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
                                            'rmse': rmse_ftr, 'mae' : mae_ftr},
                                'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
                                          'rmse': rmse, 'mae': mae },
                                'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}

              # save simulation data here as checkpoint
              with open('simulation3_data.json', 'w') as fp:
                  json.dump(simulation3, fp)

              # print a running summary for every MA processed so far
              # (use a separate loop variable to avoid shadowing the outer `ma`)
              for key in simulation3.keys():
                  print('\n' + key)
                  print('Prediction vs Close:\t\t' + str(round(100*simulation3[key]['accuracy']['prediction vs close'], 2))
                        + '% Accuracy')
                  print('Prediction vs Prediction:\t' + str(round(100*simulation3[key]['accuracy']['prediction vs prediction'], 2))
                        + '% Accuracy')
                  print('MSE:\t', simulation3[key]['final']['mse'],
                        '\nRMSE:\t', simulation3[key]['final']['rmse'],
                        '\nMAE:\t', simulation3[key]['final']['mae'])
    elapsed = timeit.default_timer() - start_time
    print('Runtime: mins:',elapsed/60)
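The `result_2` loop above (prediction vs. prediction) counts steps where the predicted move and the actual move point the same way. The same idea can be written vectorised with numpy; note this sketch compares move signs, so a flat move in both series counts as a match, whereas the loop's strict inequalities score any tie as a miss:

```python
import numpy as np

def directional_accuracy(pred, actual):
    """Share of steps where predicted and actual moves have the same sign."""
    pred = np.asarray(pred, dtype=float)
    actual = np.asarray(actual, dtype=float)
    pred_dir = np.sign(np.diff(pred))      # +1 up, -1 down, 0 flat
    actual_dir = np.sign(np.diff(actual))
    return float(np.mean(pred_dir == actual_dir))

# two of the three moves agree in direction: acc = 2/3
acc = directional_accuracy([1.0, 2.0, 3.0, 2.0], [1.0, 2.0, 2.5, 3.0])
```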
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17

Working on SMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.47 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4157.020, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3687.148, Time=0.04 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.21 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3458.651, Time=0.07 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3322.133, Time=0.09 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=0.74 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.81 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3324.133, Time=0.23 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 2.688 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1657.067
Date:                Sun, 12 Dec 2021   AIC                           3322.133
Time:                        14:16:48   BIC                           3340.897
Sample:                             0   HQIC                          3329.339
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1966      0.003   -387.226      0.000      -1.203      -1.191
ar.L2         -0.8952      0.006   -138.692      0.000      -0.908      -0.883
ar.L3         -0.3968      0.006    -68.284      0.000      -0.408      -0.385
sigma2         3.5858      0.017    214.535      0.000       3.553       3.619
===================================================================================
Ljung-Box (L1) (Q):                  14.47   Jarque-Bera (JB):           2428881.42
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       271.99
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
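As a sanity check, the AIC and BIC in the table follow directly from the reported log likelihood: with k = 4 estimated parameters (three AR coefficients plus sigma2) and an effective sample of 808 − 3 = 805 observations after d = 3 differences, AIC = 2k − 2 ln L and BIC = k ln n − 2 ln L reproduce the printed values up to rounding:

```python
import math

log_likelihood = -1657.067  # from the SARIMAX(3, 3, 0) table above
k = 4                       # ar.L1, ar.L2, ar.L3, sigma2
n = 808 - 3                 # observations remaining after d = 3 differencing

aic = 2 * k - 2 * log_likelihood
bic = k * math.log(n) - 2 * log_likelihood
print(round(aic, 3), round(bic, 3))  # 3322.134 3340.897 (table: 3322.133, 3340.897)
```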

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.31248, saving model to LSTM3.h5
48/48 - 3s - loss: 0.1235 - mse: 0.1235 - mae: 0.2748 - val_loss: 0.3125 - val_mse: 0.3125 - val_mae: 0.5302 - lr: 0.0010 - 3s/epoch - 53ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.31248 to 0.04955, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0349 - mse: 0.0349 - mae: 0.1484 - val_loss: 0.0495 - val_mse: 0.0495 - val_mae: 0.1877 - lr: 0.0010 - 240ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.04955 to 0.01856, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0240 - mse: 0.0240 - mae: 0.1235 - val_loss: 0.0186 - val_mse: 0.0186 - val_mae: 0.1079 - lr: 0.0010 - 264ms/epoch - 6ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.01856 to 0.01553, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0184 - mse: 0.0184 - mae: 0.1091 - val_loss: 0.0155 - val_mse: 0.0155 - val_mae: 0.1005 - lr: 0.0010 - 290ms/epoch - 6ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0144 - mse: 0.0144 - mae: 0.0966 - val_loss: 0.0197 - val_mse: 0.0197 - val_mae: 0.1114 - lr: 0.0010 - 231ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0131 - mse: 0.0131 - mae: 0.0908 - val_loss: 0.0208 - val_mse: 0.0208 - val_mae: 0.1150 - lr: 0.0010 - 217ms/epoch - 5ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0143 - mse: 0.0143 - mae: 0.0940 - val_loss: 0.0247 - val_mse: 0.0247 - val_mae: 0.1262 - lr: 0.0010 - 205ms/epoch - 4ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0134 - mse: 0.0134 - mae: 0.0943 - val_loss: 0.0278 - val_mse: 0.0278 - val_mae: 0.1345 - lr: 0.0010 - 203ms/epoch - 4ms/step
Epoch 9/500

Epoch 00009: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00009: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0142 - mse: 0.0142 - mae: 0.0954 - val_loss: 0.0357 - val_mse: 0.0357 - val_mae: 0.1546 - lr: 0.0010 - 209ms/epoch - 4ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0163 - mse: 0.0163 - mae: 0.0991 - val_loss: 0.0292 - val_mse: 0.0292 - val_mae: 0.1386 - lr: 1.0000e-04 - 229ms/epoch - 5ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0139 - mse: 0.0139 - mae: 0.0954 - val_loss: 0.0304 - val_mse: 0.0304 - val_mae: 0.1413 - lr: 1.0000e-04 - 197ms/epoch - 4ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0110 - mse: 0.0110 - mae: 0.0846 - val_loss: 0.0307 - val_mse: 0.0307 - val_mae: 0.1419 - lr: 1.0000e-04 - 237ms/epoch - 5ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0106 - mse: 0.0106 - mae: 0.0830 - val_loss: 0.0325 - val_mse: 0.0325 - val_mae: 0.1464 - lr: 1.0000e-04 - 268ms/epoch - 6ms/step
Epoch 14/500

Epoch 00014: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00014: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0090 - mse: 0.0090 - mae: 0.0769 - val_loss: 0.0330 - val_mse: 0.0330 - val_mae: 0.1478 - lr: 1.0000e-04 - 222ms/epoch - 5ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0077 - mse: 0.0077 - mae: 0.0708 - val_loss: 0.0334 - val_mse: 0.0334 - val_mae: 0.1488 - lr: 1.0000e-05 - 217ms/epoch - 5ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0082 - mse: 0.0082 - mae: 0.0715 - val_loss: 0.0336 - val_mse: 0.0336 - val_mae: 0.1493 - lr: 1.0000e-05 - 195ms/epoch - 4ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0082 - mse: 0.0082 - mae: 0.0728 - val_loss: 0.0339 - val_mse: 0.0339 - val_mae: 0.1502 - lr: 1.0000e-05 - 235ms/epoch - 5ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0698 - val_loss: 0.0344 - val_mse: 0.0344 - val_mae: 0.1513 - lr: 1.0000e-05 - 246ms/epoch - 5ms/step
Epoch 19/500

Epoch 00019: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00019: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0078 - mse: 0.0078 - mae: 0.0703 - val_loss: 0.0344 - val_mse: 0.0344 - val_mae: 0.1514 - lr: 1.0000e-05 - 222ms/epoch - 5ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0077 - mse: 0.0077 - mae: 0.0698 - val_loss: 0.0347 - val_mse: 0.0347 - val_mae: 0.1522 - lr: 1.0000e-05 - 178ms/epoch - 4ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0077 - mse: 0.0077 - mae: 0.0702 - val_loss: 0.0347 - val_mse: 0.0347 - val_mae: 0.1523 - lr: 1.0000e-05 - 218ms/epoch - 5ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0082 - mse: 0.0082 - mae: 0.0712 - val_loss: 0.0349 - val_mse: 0.0349 - val_mae: 0.1527 - lr: 1.0000e-05 - 236ms/epoch - 5ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0670 - val_loss: 0.0350 - val_mse: 0.0350 - val_mae: 0.1530 - lr: 1.0000e-05 - 230ms/epoch - 5ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0667 - val_loss: 0.0352 - val_mse: 0.0352 - val_mae: 0.1534 - lr: 1.0000e-05 - 219ms/epoch - 5ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0643 - val_loss: 0.0355 - val_mse: 0.0355 - val_mae: 0.1542 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0074 - mse: 0.0074 - mae: 0.0687 - val_loss: 0.0357 - val_mse: 0.0357 - val_mae: 0.1548 - lr: 1.0000e-05 - 207ms/epoch - 4ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0657 - val_loss: 0.0358 - val_mse: 0.0358 - val_mae: 0.1549 - lr: 1.0000e-05 - 235ms/epoch - 5ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0669 - val_loss: 0.0361 - val_mse: 0.0361 - val_mae: 0.1558 - lr: 1.0000e-05 - 236ms/epoch - 5ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0674 - val_loss: 0.0361 - val_mse: 0.0361 - val_mae: 0.1558 - lr: 1.0000e-05 - 228ms/epoch - 5ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0074 - mse: 0.0074 - mae: 0.0677 - val_loss: 0.0363 - val_mse: 0.0363 - val_mae: 0.1562 - lr: 1.0000e-05 - 217ms/epoch - 5ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0648 - val_loss: 0.0366 - val_mse: 0.0366 - val_mae: 0.1570 - lr: 1.0000e-05 - 239ms/epoch - 5ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0075 - mse: 0.0075 - mae: 0.0687 - val_loss: 0.0366 - val_mse: 0.0366 - val_mae: 0.1568 - lr: 1.0000e-05 - 230ms/epoch - 5ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0655 - val_loss: 0.0368 - val_mse: 0.0368 - val_mae: 0.1573 - lr: 1.0000e-05 - 239ms/epoch - 5ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0651 - val_loss: 0.0371 - val_mse: 0.0371 - val_mae: 0.1581 - lr: 1.0000e-05 - 213ms/epoch - 4ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0653 - val_loss: 0.0372 - val_mse: 0.0372 - val_mae: 0.1583 - lr: 1.0000e-05 - 231ms/epoch - 5ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0660 - val_loss: 0.0374 - val_mse: 0.0374 - val_mae: 0.1590 - lr: 1.0000e-05 - 217ms/epoch - 5ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0652 - val_loss: 0.0380 - val_mse: 0.0380 - val_mae: 0.1602 - lr: 1.0000e-05 - 243ms/epoch - 5ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0638 - val_loss: 0.0383 - val_mse: 0.0383 - val_mae: 0.1610 - lr: 1.0000e-05 - 229ms/epoch - 5ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0641 - val_loss: 0.0385 - val_mse: 0.0385 - val_mae: 0.1614 - lr: 1.0000e-05 - 216ms/epoch - 5ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0634 - val_loss: 0.0386 - val_mse: 0.0386 - val_mae: 0.1617 - lr: 1.0000e-05 - 201ms/epoch - 4ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0646 - val_loss: 0.0384 - val_mse: 0.0384 - val_mae: 0.1613 - lr: 1.0000e-05 - 223ms/epoch - 5ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0633 - val_loss: 0.0384 - val_mse: 0.0384 - val_mae: 0.1614 - lr: 1.0000e-05 - 231ms/epoch - 5ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0618 - val_loss: 0.0383 - val_mse: 0.0383 - val_mae: 0.1611 - lr: 1.0000e-05 - 220ms/epoch - 5ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0641 - val_loss: 0.0381 - val_mse: 0.0381 - val_mae: 0.1605 - lr: 1.0000e-05 - 230ms/epoch - 5ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0600 - val_loss: 0.0383 - val_mse: 0.0383 - val_mae: 0.1612 - lr: 1.0000e-05 - 200ms/epoch - 4ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0647 - val_loss: 0.0388 - val_mse: 0.0388 - val_mae: 0.1623 - lr: 1.0000e-05 - 195ms/epoch - 4ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0636 - val_loss: 0.0390 - val_mse: 0.0390 - val_mae: 0.1627 - lr: 1.0000e-05 - 223ms/epoch - 5ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0638 - val_loss: 0.0391 - val_mse: 0.0391 - val_mae: 0.1629 - lr: 1.0000e-05 - 237ms/epoch - 5ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0617 - val_loss: 0.0392 - val_mse: 0.0392 - val_mae: 0.1632 - lr: 1.0000e-05 - 214ms/epoch - 4ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0624 - val_loss: 0.0394 - val_mse: 0.0394 - val_mae: 0.1638 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0584 - val_loss: 0.0395 - val_mse: 0.0395 - val_mae: 0.1641 - lr: 1.0000e-05 - 235ms/epoch - 5ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0621 - val_loss: 0.0396 - val_mse: 0.0396 - val_mae: 0.1643 - lr: 1.0000e-05 - 231ms/epoch - 5ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0624 - val_loss: 0.0395 - val_mse: 0.0395 - val_mae: 0.1639 - lr: 1.0000e-05 - 238ms/epoch - 5ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.01553
48/48 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0613 - val_loss: 0.0392 - val_mse: 0.0392 - val_mae: 0.1632 - lr: 1.0000e-05 - 227ms/epoch - 5ms/step
Epoch 00054: early stopping
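The learning-rate drops at epochs 9 and 14 and the stop at epoch 54 are consistent with a `ReduceLROnPlateau(factor=0.1, patience=5, min_lr=1e-5)` plus `EarlyStopping(patience=50)` pair watching `val_loss`, with the best value set at epoch 4; these callback arguments are inferred from the log, since the training cell itself is not shown. A pure-Python sketch of that bookkeeping (the log's extra "reducing learning rate to 1e-05" message at epoch 19 is Keras re-announcing the floor due to float32 lr, which this sketch clamps away):

```python
def plateau_schedule(val_losses, lr=1e-3, factor=0.1, lr_patience=5,
                     min_lr=1e-5, stop_patience=50):
    """Bookkeeping sketch of ReduceLROnPlateau + EarlyStopping on val_loss.
    Returns (epochs at which the lr was cut, epoch at which training stopped)."""
    best = float("inf")
    lr_wait = stop_wait = 0
    cut_epochs = []
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:                       # improvement: reset both counters
            best, lr_wait, stop_wait = loss, 0, 0
            continue
        lr_wait += 1
        stop_wait += 1
        if stop_wait >= stop_patience:        # EarlyStopping(patience=50)
            return cut_epochs, epoch
        if lr_wait >= lr_patience and lr > min_lr:
            lr = max(lr * factor, min_lr)     # ReduceLROnPlateau(factor=0.1)
            if lr < min_lr * (1 + 1e-9):      # clamp float noise at min_lr
                lr = min_lr
            cut_epochs.append(epoch)
            lr_wait = 0
    return cut_epochs, len(val_losses)

# val_loss improves through epoch 4, then never again (the shape of the log
# above; the plateau value is illustrative)
losses = [0.31, 0.05, 0.019, 0.0155] + [0.03] * 60
print(plateau_schedule(losses))  # ([9, 14], 54)
```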
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	50.75% Accuracy
MSE:	 39.71707698339657 
RMSE:	 6.302148600548591 
MAPE:	 5.205453851116191
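The evaluation block above can be reproduced with a few lines of NumPy. The exact "Prediction vs Close" definition is an assumption about the notebook's (unshown) helper, read here as sign agreement between the predicted and realised day-over-day moves:

```python
import numpy as np

def evaluate(y_true: np.ndarray, y_pred: np.ndarray):
    """MSE / RMSE / MAPE plus sign-of-move accuracy vs the close."""
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    rmse = float(np.sqrt(mse))
    mape = float(np.mean(np.abs(err) / np.abs(y_true)) * 100)
    # fraction of steps where the predicted move matches the realised move
    direction = float(np.mean(np.sign(np.diff(y_pred))
                              == np.sign(np.diff(y_true))) * 100)
    return mse, rmse, mape, direction

y = np.array([100.0, 102.0, 101.0, 105.0])   # illustrative closes
p = np.array([101.0, 101.0, 103.0, 104.0])   # illustrative predictions
mse, rmse, mape, acc = evaluate(y, p)
print(f"MSE {mse:.3f}  RMSE {rmse:.3f}  MAPE {mape:.3f}%  dir {acc:.1f}%")
```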
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51
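TA-Lib's EMA uses the smoothing factor k = 2 / (timeperiod + 1) and, unlike pandas' default `ewm`, seeds the recursion with an SMA of the first `timeperiod` values. A small sketch of that convention (sample prices are illustrative):

```python
import numpy as np

def ema(price: np.ndarray, timeperiod: int = 30) -> np.ndarray:
    """EMA with k = 2/(n+1), seeded with the SMA of the first n bars
    (the TA-Lib convention)."""
    out = np.full(len(price), np.nan)
    k = 2.0 / (timeperiod + 1)
    out[timeperiod - 1] = price[:timeperiod].mean()   # SMA seed
    for i in range(timeperiod, len(price)):
        out[i] = k * price[i] + (1 - k) * out[i - 1]
    return out

print(ema(np.array([1.0, 2.0, 3.0, 4.0, 5.0]), timeperiod=3))
# [nan nan  2.  3.  4.]
```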

Working on EMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.42 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4231.556, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3761.238, Time=0.04 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.30 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3532.227, Time=0.09 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3394.496, Time=0.10 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=0.87 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.67 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3396.496, Time=0.23 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 2.742 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1693.248
Date:                Sun, 12 Dec 2021   AIC                           3394.496
Time:                        14:18:13   BIC                           3413.260
Sample:                             0   HQIC                          3401.702
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.569      0.000      -1.204      -1.192
ar.L2         -0.8976      0.006   -139.811      0.000      -0.910      -0.885
ar.L3         -0.3984      0.006    -68.662      0.000      -0.410      -0.387
sigma2         3.9230      0.018    215.372      0.000       3.887       3.959
===================================================================================
Ljung-Box (L1) (Q):                  14.54   Jarque-Bera (JB):           2462173.05
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.82
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.40077, saving model to LSTM3.h5
16/16 - 2s - loss: 0.3798 - mse: 0.3798 - mae: 0.4758 - val_loss: 0.4008 - val_mse: 0.4008 - val_mae: 0.6029 - lr: 0.0010 - 2s/epoch - 144ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.40077 to 0.34145, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0757 - mse: 0.0757 - mae: 0.2395 - val_loss: 0.3414 - val_mse: 0.3414 - val_mae: 0.5546 - lr: 0.0010 - 98ms/epoch - 6ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.34145
16/16 - 0s - loss: 0.0330 - mse: 0.0330 - mae: 0.1493 - val_loss: 0.3493 - val_mse: 0.3493 - val_mae: 0.5631 - lr: 0.0010 - 81ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.34145 to 0.27846, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0299 - mse: 0.0299 - mae: 0.1395 - val_loss: 0.2785 - val_mse: 0.2785 - val_mae: 0.4979 - lr: 0.0010 - 87ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.27846 to 0.24516, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0210 - mse: 0.0210 - mae: 0.1153 - val_loss: 0.2452 - val_mse: 0.2452 - val_mae: 0.4639 - lr: 0.0010 - 87ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.24516 to 0.22130, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0196 - mse: 0.0196 - mae: 0.1145 - val_loss: 0.2213 - val_mse: 0.2213 - val_mae: 0.4379 - lr: 0.0010 - 102ms/epoch - 6ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.22130 to 0.20755, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0166 - mse: 0.0166 - mae: 0.1031 - val_loss: 0.2076 - val_mse: 0.2076 - val_mae: 0.4221 - lr: 0.0010 - 128ms/epoch - 8ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.20755 to 0.18812, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0160 - mse: 0.0160 - mae: 0.0998 - val_loss: 0.1881 - val_mse: 0.1881 - val_mae: 0.3987 - lr: 0.0010 - 118ms/epoch - 7ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.18812 to 0.18203, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0133 - mse: 0.0133 - mae: 0.0923 - val_loss: 0.1820 - val_mse: 0.1820 - val_mae: 0.3915 - lr: 0.0010 - 106ms/epoch - 7ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.18203 to 0.16938, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0128 - mse: 0.0128 - mae: 0.0896 - val_loss: 0.1694 - val_mse: 0.1694 - val_mae: 0.3756 - lr: 0.0010 - 101ms/epoch - 6ms/step
Epoch 11/500

Epoch 00011: val_loss improved from 0.16938 to 0.16630, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0113 - mse: 0.0113 - mae: 0.0847 - val_loss: 0.1663 - val_mse: 0.1663 - val_mae: 0.3715 - lr: 0.0010 - 102ms/epoch - 6ms/step
Epoch 12/500

Epoch 00012: val_loss improved from 0.16630 to 0.15845, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0111 - mse: 0.0111 - mae: 0.0834 - val_loss: 0.1585 - val_mse: 0.1585 - val_mae: 0.3605 - lr: 0.0010 - 97ms/epoch - 6ms/step
Epoch 13/500

Epoch 00013: val_loss improved from 0.15845 to 0.15831, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0103 - mse: 0.0103 - mae: 0.0817 - val_loss: 0.1583 - val_mse: 0.1583 - val_mae: 0.3603 - lr: 0.0010 - 96ms/epoch - 6ms/step
Epoch 14/500

Epoch 00014: val_loss improved from 0.15831 to 0.15122, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0107 - mse: 0.0107 - mae: 0.0824 - val_loss: 0.1512 - val_mse: 0.1512 - val_mae: 0.3505 - lr: 0.0010 - 90ms/epoch - 6ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.15122
16/16 - 0s - loss: 0.0091 - mse: 0.0091 - mae: 0.0758 - val_loss: 0.1529 - val_mse: 0.1529 - val_mae: 0.3525 - lr: 0.0010 - 81ms/epoch - 5ms/step
Epoch 16/500

Epoch 00016: val_loss improved from 0.15122 to 0.14976, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0092 - mse: 0.0092 - mae: 0.0745 - val_loss: 0.1498 - val_mse: 0.1498 - val_mae: 0.3479 - lr: 0.0010 - 106ms/epoch - 7ms/step
Epoch 17/500

Epoch 00017: val_loss improved from 0.14976 to 0.14433, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0086 - mse: 0.0086 - mae: 0.0739 - val_loss: 0.1443 - val_mse: 0.1443 - val_mae: 0.3401 - lr: 0.0010 - 132ms/epoch - 8ms/step
Epoch 18/500

Epoch 00018: val_loss improved from 0.14433 to 0.13994, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0085 - mse: 0.0085 - mae: 0.0729 - val_loss: 0.1399 - val_mse: 0.1399 - val_mae: 0.3338 - lr: 0.0010 - 111ms/epoch - 7ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.13994
16/16 - 0s - loss: 0.0074 - mse: 0.0074 - mae: 0.0678 - val_loss: 0.1414 - val_mse: 0.1414 - val_mae: 0.3357 - lr: 0.0010 - 95ms/epoch - 6ms/step
Epoch 20/500

Epoch 00020: val_loss improved from 0.13994 to 0.13241, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0077 - mse: 0.0077 - mae: 0.0698 - val_loss: 0.1324 - val_mse: 0.1324 - val_mae: 0.3227 - lr: 0.0010 - 115ms/epoch - 7ms/step
Epoch 21/500

Epoch 00021: val_loss improved from 0.13241 to 0.13227, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0659 - val_loss: 0.1323 - val_mse: 0.1323 - val_mae: 0.3228 - lr: 0.0010 - 120ms/epoch - 7ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.13227
16/16 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0650 - val_loss: 0.1329 - val_mse: 0.1329 - val_mae: 0.3239 - lr: 0.0010 - 92ms/epoch - 6ms/step
Epoch 23/500

Epoch 00023: val_loss improved from 0.13227 to 0.12353, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0642 - val_loss: 0.1235 - val_mse: 0.1235 - val_mae: 0.3100 - lr: 0.0010 - 101ms/epoch - 6ms/step
Epoch 24/500

Epoch 00024: val_loss improved from 0.12353 to 0.12175, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0632 - val_loss: 0.1217 - val_mse: 0.1217 - val_mae: 0.3071 - lr: 0.0010 - 94ms/epoch - 6ms/step
Epoch 25/500

Epoch 00025: val_loss improved from 0.12175 to 0.11701, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0617 - val_loss: 0.1170 - val_mse: 0.1170 - val_mae: 0.2999 - lr: 0.0010 - 104ms/epoch - 7ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.11701
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0580 - val_loss: 0.1190 - val_mse: 0.1190 - val_mae: 0.3033 - lr: 0.0010 - 90ms/epoch - 6ms/step
Epoch 27/500

Epoch 00027: val_loss improved from 0.11701 to 0.10937, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0605 - val_loss: 0.1094 - val_mse: 0.1094 - val_mae: 0.2885 - lr: 0.0010 - 91ms/epoch - 6ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.10937
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0597 - val_loss: 0.1104 - val_mse: 0.1104 - val_mae: 0.2904 - lr: 0.0010 - 86ms/epoch - 5ms/step
Epoch 29/500

Epoch 00029: val_loss improved from 0.10937 to 0.10331, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0577 - val_loss: 0.1033 - val_mse: 0.1033 - val_mae: 0.2785 - lr: 0.0010 - 117ms/epoch - 7ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0573 - val_loss: 0.1096 - val_mse: 0.1096 - val_mae: 0.2888 - lr: 0.0010 - 83ms/epoch - 5ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0562 - val_loss: 0.1106 - val_mse: 0.1106 - val_mae: 0.2908 - lr: 0.0010 - 84ms/epoch - 5ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0550 - val_loss: 0.1144 - val_mse: 0.1144 - val_mae: 0.2967 - lr: 0.0010 - 78ms/epoch - 5ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0522 - val_loss: 0.1160 - val_mse: 0.1160 - val_mae: 0.2994 - lr: 0.0010 - 79ms/epoch - 5ms/step
Epoch 34/500

Epoch 00034: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00034: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0536 - val_loss: 0.1075 - val_mse: 0.1075 - val_mae: 0.2858 - lr: 0.0010 - 86ms/epoch - 5ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0560 - val_loss: 0.1077 - val_mse: 0.1077 - val_mae: 0.2862 - lr: 1.0000e-04 - 77ms/epoch - 5ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0523 - val_loss: 0.1081 - val_mse: 0.1081 - val_mae: 0.2870 - lr: 1.0000e-04 - 82ms/epoch - 5ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0499 - val_loss: 0.1082 - val_mse: 0.1082 - val_mae: 0.2871 - lr: 1.0000e-04 - 104ms/epoch - 7ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0509 - val_loss: 0.1078 - val_mse: 0.1078 - val_mae: 0.2866 - lr: 1.0000e-04 - 91ms/epoch - 6ms/step
Epoch 39/500

Epoch 00039: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00039: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0502 - val_loss: 0.1084 - val_mse: 0.1084 - val_mae: 0.2875 - lr: 1.0000e-04 - 112ms/epoch - 7ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0517 - val_loss: 0.1083 - val_mse: 0.1083 - val_mae: 0.2874 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0527 - val_loss: 0.1082 - val_mse: 0.1082 - val_mae: 0.2873 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0529 - val_loss: 0.1082 - val_mse: 0.1082 - val_mae: 0.2872 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0522 - val_loss: 0.1082 - val_mse: 0.1082 - val_mae: 0.2871 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 44/500

Epoch 00044: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00044: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0527 - val_loss: 0.1081 - val_mse: 0.1081 - val_mae: 0.2871 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0535 - val_loss: 0.1081 - val_mse: 0.1081 - val_mae: 0.2871 - lr: 1.0000e-05 - 93ms/epoch - 6ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0534 - val_loss: 0.1081 - val_mse: 0.1081 - val_mae: 0.2871 - lr: 1.0000e-05 - 88ms/epoch - 6ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0529 - val_loss: 0.1082 - val_mse: 0.1082 - val_mae: 0.2872 - lr: 1.0000e-05 - 91ms/epoch - 6ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0510 - val_loss: 0.1082 - val_mse: 0.1082 - val_mae: 0.2871 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0534 - val_loss: 0.1081 - val_mse: 0.1081 - val_mae: 0.2871 - lr: 1.0000e-05 - 89ms/epoch - 6ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0512 - val_loss: 0.1082 - val_mse: 0.1082 - val_mae: 0.2871 - lr: 1.0000e-05 - 90ms/epoch - 6ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0555 - val_loss: 0.1082 - val_mse: 0.1082 - val_mae: 0.2872 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0521 - val_loss: 0.1082 - val_mse: 0.1082 - val_mae: 0.2872 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0545 - val_loss: 0.1081 - val_mse: 0.1081 - val_mae: 0.2871 - lr: 1.0000e-05 - 78ms/epoch - 5ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0521 - val_loss: 0.1080 - val_mse: 0.1080 - val_mae: 0.2870 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0522 - val_loss: 0.1079 - val_mse: 0.1079 - val_mae: 0.2868 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0518 - val_loss: 0.1079 - val_mse: 0.1079 - val_mae: 0.2867 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0515 - val_loss: 0.1079 - val_mse: 0.1079 - val_mae: 0.2868 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0564 - val_loss: 0.1079 - val_mse: 0.1079 - val_mae: 0.2868 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0535 - val_loss: 0.1078 - val_mse: 0.1078 - val_mae: 0.2866 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0549 - val_loss: 0.1077 - val_mse: 0.1077 - val_mae: 0.2865 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0514 - val_loss: 0.1077 - val_mse: 0.1077 - val_mae: 0.2864 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0511 - val_loss: 0.1077 - val_mse: 0.1077 - val_mae: 0.2864 - lr: 1.0000e-05 - 90ms/epoch - 6ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0547 - val_loss: 0.1077 - val_mse: 0.1077 - val_mae: 0.2865 - lr: 1.0000e-05 - 79ms/epoch - 5ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0522 - val_loss: 0.1077 - val_mse: 0.1077 - val_mae: 0.2865 - lr: 1.0000e-05 - 80ms/epoch - 5ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0516 - val_loss: 0.1077 - val_mse: 0.1077 - val_mae: 0.2865 - lr: 1.0000e-05 - 88ms/epoch - 6ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0524 - val_loss: 0.1077 - val_mse: 0.1077 - val_mae: 0.2865 - lr: 1.0000e-05 - 77ms/epoch - 5ms/step
Epoch 67/500

Epoch 00067: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0522 - val_loss: 0.1076 - val_mse: 0.1076 - val_mae: 0.2863 - lr: 1.0000e-05 - 88ms/epoch - 5ms/step
Epoch 68/500

Epoch 00068: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0521 - val_loss: 0.1076 - val_mse: 0.1076 - val_mae: 0.2863 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 69/500

Epoch 00069: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0541 - val_loss: 0.1077 - val_mse: 0.1077 - val_mae: 0.2864 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 70/500

Epoch 00070: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0526 - val_loss: 0.1076 - val_mse: 0.1076 - val_mae: 0.2863 - lr: 1.0000e-05 - 88ms/epoch - 6ms/step
Epoch 71/500

Epoch 00071: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0519 - val_loss: 0.1076 - val_mse: 0.1076 - val_mae: 0.2863 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 72/500

Epoch 00072: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0537 - val_loss: 0.1074 - val_mse: 0.1074 - val_mae: 0.2860 - lr: 1.0000e-05 - 80ms/epoch - 5ms/step
Epoch 73/500

Epoch 00073: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0539 - val_loss: 0.1073 - val_mse: 0.1073 - val_mae: 0.2858 - lr: 1.0000e-05 - 88ms/epoch - 6ms/step
Epoch 74/500

Epoch 00074: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0518 - val_loss: 0.1073 - val_mse: 0.1073 - val_mae: 0.2858 - lr: 1.0000e-05 - 77ms/epoch - 5ms/step
Epoch 75/500

Epoch 00075: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0510 - val_loss: 0.1071 - val_mse: 0.1071 - val_mae: 0.2855 - lr: 1.0000e-05 - 90ms/epoch - 6ms/step
Epoch 76/500

Epoch 00076: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0517 - val_loss: 0.1070 - val_mse: 0.1070 - val_mae: 0.2853 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 77/500

Epoch 00077: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0517 - val_loss: 0.1070 - val_mse: 0.1070 - val_mae: 0.2853 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 78/500

Epoch 00078: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0533 - val_loss: 0.1069 - val_mse: 0.1069 - val_mae: 0.2851 - lr: 1.0000e-05 - 101ms/epoch - 6ms/step
Epoch 79/500

Epoch 00079: val_loss did not improve from 0.10331
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0532 - val_loss: 0.1069 - val_mse: 0.1069 - val_mae: 0.2852 - lr: 1.0000e-05 - 91ms/epoch - 6ms/step
Epoch 00079: early stopping
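The run above shows two callbacks at work: ReduceLROnPlateau steps the learning rate from 1e-3 down to 1e-5 as val_loss plateaus, and EarlyStopping ends training at epoch 79. A minimal pure-Python sketch of that plateau logic follows; the patience values and decay factor are assumptions, since the notebook's actual callback configuration is not shown in this output excerpt.

```python
# Sketch of the plateau behaviour visible in the log above:
# ReduceLROnPlateau-style lr decay plus EarlyStopping-style termination.
# lr_patience, stop_patience, and factor are assumed values, not the
# notebook's actual callback settings.

def run_schedule(val_losses, lr=1e-3, factor=0.1, lr_patience=5,
                 stop_patience=10, min_lr=1e-5):
    """Replay a sequence of validation losses; return the final lr and
    the 1-based epoch at which early stopping would fire."""
    best = float("inf")
    since_improve = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best = loss
            since_improve = 0
        else:
            since_improve += 1
            if since_improve % lr_patience == 0:   # plateau: decay the lr
                lr = max(lr * factor, min_lr)
            if since_improve >= stop_patience:     # plateau too long: stop
                return lr, epoch
    return lr, len(val_losses)
```

Replaying a loss curve that improves for a few epochs and then flatlines reproduces the pattern in the log: two lr reductions, then early stopping.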
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	50.75% Accuracy
MSE:	 39.71707698339657 
RMSE:	 6.302148600548591 
MAPE:	 5.205453851116191

EMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 18.546398408821787 
RMSE:	 4.306552961339474 
MAPE:	 3.4160340524918316

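The per-indicator summaries above report directional accuracy alongside MSE, RMSE, and MAPE. A hedged sketch of how such metrics can be computed; the function and variable names are illustrative, not the notebook's own.

```python
import math

# Illustrative versions of the summary metrics printed above: MSE, RMSE,
# MAPE, and a directional "Prediction vs Close" accuracy.

def regression_metrics(actual, predicted):
    errs = [a - p for a, p in zip(actual, predicted)]
    mse = sum(e * e for e in errs) / len(errs)
    rmse = math.sqrt(mse)
    mape = 100 * sum(abs(e) / abs(a) for e, a in zip(errs, actual)) / len(errs)
    return mse, rmse, mape

def directional_accuracy(actual, predicted):
    # Fraction of days on which the predicted move (relative to the
    # previous day's actual close) has the same sign as the realised move.
    hits = sum(
        (p - a_prev > 0) == (a - a_prev > 0)
        for a_prev, a, p in zip(actual, actual[1:], predicted[1:])
    )
    return 100 * hits / (len(actual) - 1)
```

MAPE divides by the actual values, so this sketch assumes a price series that never touches zero, which holds for the close prices used here.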
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49
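The help text above documents TA-Lib's WMA. For illustration, the same linearly weighted average can be sketched in pure Python; the notebook itself calls talib.WMA, and this reimplementation only shows the weighting scheme.

```python
# Pure-Python sketch of the weighted moving average described by the
# TA-Lib help text above: weights 1..timeperiod, with the most recent
# price weighted heaviest. Illustrative only; the notebook uses talib.WMA.

def wma(prices, timeperiod=30):
    weights = range(1, timeperiod + 1)
    denom = timeperiod * (timeperiod + 1) / 2   # sum of 1..timeperiod
    out = [None] * (timeperiod - 1)             # warm-up: not enough history
    for i in range(timeperiod - 1, len(prices)):
        window = prices[i - timeperiod + 1 : i + 1]
        out.append(sum(w * p for w, p in zip(weights, window)) / denom)
    return out
```

The leading None entries correspond to the warm-up rows TA-Lib returns as NaN, which is why a lookback-dependent number of initial rows is dropped before modelling.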

Working on WMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.44 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4264.089, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3793.930, Time=0.04 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.25 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3564.923, Time=0.07 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3427.258, Time=0.09 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.29 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.48 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3429.258, Time=0.18 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 2.877 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1709.629
Date:                Sun, 12 Dec 2021   AIC                           3427.258
Time:                        14:19:34   BIC                           3446.021
Sample:                             0   HQIC                          3434.464
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1981      0.003   -389.386      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.699      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.737      0.000      -0.410      -0.387
sigma2         4.0860      0.019    215.311      0.000       4.049       4.123
===================================================================================
Ljung-Box (L1) (Q):                  14.57   Jarque-Bera (JB):           2460901.70
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.75
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.40392, saving model to LSTM3.h5
17/17 - 3s - loss: 0.6042 - mse: 0.6042 - mae: 0.6609 - val_loss: 0.4039 - val_mse: 0.4039 - val_mae: 0.6126 - lr: 0.0010 - 3s/epoch - 158ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.40392 to 0.25737, saving model to LSTM3.h5
17/17 - 0s - loss: 0.1306 - mse: 0.1306 - mae: 0.3161 - val_loss: 0.2574 - val_mse: 0.2574 - val_mae: 0.4835 - lr: 0.0010 - 106ms/epoch - 6ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.25737
17/17 - 0s - loss: 0.0763 - mse: 0.0763 - mae: 0.2346 - val_loss: 0.2577 - val_mse: 0.2577 - val_mae: 0.4859 - lr: 0.0010 - 113ms/epoch - 7ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.25737 to 0.19405, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0442 - mse: 0.0442 - mae: 0.1700 - val_loss: 0.1941 - val_mse: 0.1941 - val_mae: 0.4185 - lr: 0.0010 - 102ms/epoch - 6ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.19405 to 0.13565, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0332 - mse: 0.0332 - mae: 0.1478 - val_loss: 0.1357 - val_mse: 0.1357 - val_mae: 0.3443 - lr: 0.0010 - 114ms/epoch - 7ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.13565 to 0.10726, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0275 - mse: 0.0275 - mae: 0.1331 - val_loss: 0.1073 - val_mse: 0.1073 - val_mae: 0.3024 - lr: 0.0010 - 104ms/epoch - 6ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.10726 to 0.08374, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0231 - mse: 0.0231 - mae: 0.1227 - val_loss: 0.0837 - val_mse: 0.0837 - val_mae: 0.2615 - lr: 0.0010 - 117ms/epoch - 7ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.08374 to 0.06914, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0196 - mse: 0.0196 - mae: 0.1126 - val_loss: 0.0691 - val_mse: 0.0691 - val_mae: 0.2326 - lr: 0.0010 - 95ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.06914 to 0.06246, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0189 - mse: 0.0189 - mae: 0.1095 - val_loss: 0.0625 - val_mse: 0.0625 - val_mae: 0.2185 - lr: 0.0010 - 104ms/epoch - 6ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.06246 to 0.05513, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0154 - mse: 0.0154 - mae: 0.0990 - val_loss: 0.0551 - val_mse: 0.0551 - val_mae: 0.2025 - lr: 0.0010 - 117ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: val_loss improved from 0.05513 to 0.04836, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0165 - mse: 0.0165 - mae: 0.1030 - val_loss: 0.0484 - val_mse: 0.0484 - val_mae: 0.1872 - lr: 0.0010 - 113ms/epoch - 7ms/step
Epoch 12/500

Epoch 00012: val_loss improved from 0.04836 to 0.04551, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0142 - mse: 0.0142 - mae: 0.0956 - val_loss: 0.0455 - val_mse: 0.0455 - val_mae: 0.1807 - lr: 0.0010 - 103ms/epoch - 6ms/step
Epoch 13/500

Epoch 00013: val_loss improved from 0.04551 to 0.04381, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0152 - mse: 0.0152 - mae: 0.0977 - val_loss: 0.0438 - val_mse: 0.0438 - val_mae: 0.1773 - lr: 0.0010 - 109ms/epoch - 6ms/step
Epoch 14/500

Epoch 00014: val_loss improved from 0.04381 to 0.04064, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0128 - mse: 0.0128 - mae: 0.0890 - val_loss: 0.0406 - val_mse: 0.0406 - val_mae: 0.1698 - lr: 0.0010 - 99ms/epoch - 6ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.04064
17/17 - 0s - loss: 0.0114 - mse: 0.0114 - mae: 0.0860 - val_loss: 0.0407 - val_mse: 0.0407 - val_mae: 0.1710 - lr: 0.0010 - 94ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: val_loss improved from 0.04064 to 0.03795, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0118 - mse: 0.0118 - mae: 0.0862 - val_loss: 0.0380 - val_mse: 0.0380 - val_mae: 0.1645 - lr: 0.0010 - 115ms/epoch - 7ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.03795
17/17 - 0s - loss: 0.0110 - mse: 0.0110 - mae: 0.0818 - val_loss: 0.0381 - val_mse: 0.0381 - val_mae: 0.1654 - lr: 0.0010 - 103ms/epoch - 6ms/step
Epoch 18/500

Epoch 00018: val_loss improved from 0.03795 to 0.03378, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0093 - mse: 0.0093 - mae: 0.0759 - val_loss: 0.0338 - val_mse: 0.0338 - val_mae: 0.1538 - lr: 0.0010 - 103ms/epoch - 6ms/step
Epoch 19/500

Epoch 00019: val_loss improved from 0.03378 to 0.03132, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0091 - mse: 0.0091 - mae: 0.0743 - val_loss: 0.0313 - val_mse: 0.0313 - val_mae: 0.1473 - lr: 0.0010 - 99ms/epoch - 6ms/step
Epoch 20/500

Epoch 00020: val_loss improved from 0.03132 to 0.02844, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0089 - mse: 0.0089 - mae: 0.0758 - val_loss: 0.0284 - val_mse: 0.0284 - val_mae: 0.1391 - lr: 0.0010 - 118ms/epoch - 7ms/step
Epoch 21/500

Epoch 00021: val_loss improved from 0.02844 to 0.02538, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0091 - mse: 0.0091 - mae: 0.0744 - val_loss: 0.0254 - val_mse: 0.0254 - val_mae: 0.1298 - lr: 0.0010 - 137ms/epoch - 8ms/step
Epoch 22/500

Epoch 00022: val_loss improved from 0.02538 to 0.02217, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0083 - mse: 0.0083 - mae: 0.0729 - val_loss: 0.0222 - val_mse: 0.0222 - val_mae: 0.1196 - lr: 0.0010 - 100ms/epoch - 6ms/step
Epoch 23/500

Epoch 00023: val_loss improved from 0.02217 to 0.02136, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0080 - mse: 0.0080 - mae: 0.0700 - val_loss: 0.0214 - val_mse: 0.0214 - val_mae: 0.1171 - lr: 0.0010 - 109ms/epoch - 6ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.02136
17/17 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0654 - val_loss: 0.0221 - val_mse: 0.0221 - val_mae: 0.1194 - lr: 0.0010 - 103ms/epoch - 6ms/step
Epoch 25/500

Epoch 00025: val_loss improved from 0.02136 to 0.01980, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0672 - val_loss: 0.0198 - val_mse: 0.0198 - val_mae: 0.1110 - lr: 0.0010 - 116ms/epoch - 7ms/step
Epoch 26/500

Epoch 00026: val_loss improved from 0.01980 to 0.01751, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0654 - val_loss: 0.0175 - val_mse: 0.0175 - val_mae: 0.1029 - lr: 0.0010 - 119ms/epoch - 7ms/step
Epoch 27/500

Epoch 00027: val_loss improved from 0.01751 to 0.01636, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0658 - val_loss: 0.0164 - val_mse: 0.0164 - val_mae: 0.0988 - lr: 0.0010 - 119ms/epoch - 7ms/step
Epoch 28/500

Epoch 00028: val_loss improved from 0.01636 to 0.01474, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0596 - val_loss: 0.0147 - val_mse: 0.0147 - val_mae: 0.0927 - lr: 0.0010 - 109ms/epoch - 6ms/step
Epoch 29/500

Epoch 00029: val_loss improved from 0.01474 to 0.01353, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0605 - val_loss: 0.0135 - val_mse: 0.0135 - val_mae: 0.0886 - lr: 0.0010 - 133ms/epoch - 8ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.01353
17/17 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0614 - val_loss: 0.0153 - val_mse: 0.0153 - val_mae: 0.0945 - lr: 0.0010 - 105ms/epoch - 6ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.01353
17/17 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0622 - val_loss: 0.0136 - val_mse: 0.0136 - val_mae: 0.0887 - lr: 0.0010 - 87ms/epoch - 5ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.01353
17/17 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0616 - val_loss: 0.0142 - val_mse: 0.0142 - val_mae: 0.0903 - lr: 0.0010 - 87ms/epoch - 5ms/step
Epoch 33/500

Epoch 00033: val_loss improved from 0.01353 to 0.01202, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0601 - val_loss: 0.0120 - val_mse: 0.0120 - val_mae: 0.0844 - lr: 0.0010 - 113ms/epoch - 7ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.01202
17/17 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0578 - val_loss: 0.0121 - val_mse: 0.0121 - val_mae: 0.0843 - lr: 0.0010 - 90ms/epoch - 5ms/step
Epoch 35/500

Epoch 00035: val_loss improved from 0.01202 to 0.01100, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0564 - val_loss: 0.0110 - val_mse: 0.0110 - val_mae: 0.0814 - lr: 0.0010 - 108ms/epoch - 6ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.01100
17/17 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0555 - val_loss: 0.0116 - val_mse: 0.0116 - val_mae: 0.0826 - lr: 0.0010 - 83ms/epoch - 5ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.01100
17/17 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0528 - val_loss: 0.0120 - val_mse: 0.0120 - val_mae: 0.0831 - lr: 0.0010 - 92ms/epoch - 5ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.01100
17/17 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0553 - val_loss: 0.0143 - val_mse: 0.0143 - val_mae: 0.0894 - lr: 0.0010 - 91ms/epoch - 5ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.01100
17/17 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0542 - val_loss: 0.0119 - val_mse: 0.0119 - val_mae: 0.0829 - lr: 0.0010 - 101ms/epoch - 6ms/step
Epoch 40/500

Epoch 00040: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00040: val_loss did not improve from 0.01100
17/17 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0554 - val_loss: 0.0111 - val_mse: 0.0111 - val_mae: 0.0808 - lr: 0.0010 - 83ms/epoch - 5ms/step
Epoch 41/500

Epoch 00041: val_loss improved from 0.01100 to 0.01093, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0542 - val_loss: 0.0109 - val_mse: 0.0109 - val_mae: 0.0805 - lr: 1.0000e-04 - 118ms/epoch - 7ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0530 - val_loss: 0.0110 - val_mse: 0.0110 - val_mae: 0.0806 - lr: 1.0000e-04 - 86ms/epoch - 5ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0531 - val_loss: 0.0111 - val_mse: 0.0111 - val_mae: 0.0809 - lr: 1.0000e-04 - 90ms/epoch - 5ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0514 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0811 - lr: 1.0000e-04 - 101ms/epoch - 6ms/step
Epoch 45/500

Epoch 00045: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00045: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0521 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0812 - lr: 1.0000e-04 - 93ms/epoch - 5ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0534 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0812 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0532 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0812 - lr: 1.0000e-05 - 92ms/epoch - 5ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0548 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0812 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0528 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0813 - lr: 1.0000e-05 - 106ms/epoch - 6ms/step
Epoch 50/500

Epoch 00050: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00050: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0514 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0813 - lr: 1.0000e-05 - 88ms/epoch - 5ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0512 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0813 - lr: 1.0000e-05 - 91ms/epoch - 5ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0525 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0813 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0539 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0813 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0528 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0813 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0507 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0813 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0518 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0814 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0544 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0813 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0534 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0813 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0528 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0814 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0519 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0815 - lr: 1.0000e-05 - 88ms/epoch - 5ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0515 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0815 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0492 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0815 - lr: 1.0000e-05 - 81ms/epoch - 5ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0530 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0815 - lr: 1.0000e-05 - 78ms/epoch - 5ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0501 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0815 - lr: 1.0000e-05 - 81ms/epoch - 5ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0528 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0815 - lr: 1.0000e-05 - 80ms/epoch - 5ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0503 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0816 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 67/500

Epoch 00067: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0540 - val_loss: 0.0115 - val_mse: 0.0115 - val_mae: 0.0816 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 68/500

Epoch 00068: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0498 - val_loss: 0.0115 - val_mse: 0.0115 - val_mae: 0.0816 - lr: 1.0000e-05 - 90ms/epoch - 5ms/step
Epoch 69/500

Epoch 00069: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0506 - val_loss: 0.0115 - val_mse: 0.0115 - val_mae: 0.0816 - lr: 1.0000e-05 - 92ms/epoch - 5ms/step
Epoch 70/500

Epoch 00070: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0505 - val_loss: 0.0115 - val_mse: 0.0115 - val_mae: 0.0816 - lr: 1.0000e-05 - 110ms/epoch - 6ms/step
Epoch 71/500

Epoch 00071: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0529 - val_loss: 0.0115 - val_mse: 0.0115 - val_mae: 0.0816 - lr: 1.0000e-05 - 91ms/epoch - 5ms/step
Epoch 72/500

Epoch 00072: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0499 - val_loss: 0.0115 - val_mse: 0.0115 - val_mae: 0.0816 - lr: 1.0000e-05 - 80ms/epoch - 5ms/step
Epoch 73/500

Epoch 00073: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0490 - val_loss: 0.0115 - val_mse: 0.0115 - val_mae: 0.0816 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 74/500

Epoch 00074: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0506 - val_loss: 0.0115 - val_mse: 0.0115 - val_mae: 0.0817 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 75/500

Epoch 00075: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0499 - val_loss: 0.0115 - val_mse: 0.0115 - val_mae: 0.0817 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 76/500

Epoch 00076: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0528 - val_loss: 0.0116 - val_mse: 0.0116 - val_mae: 0.0818 - lr: 1.0000e-05 - 90ms/epoch - 5ms/step
Epoch 77/500

Epoch 00077: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0518 - val_loss: 0.0116 - val_mse: 0.0116 - val_mae: 0.0818 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 78/500

Epoch 00078: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0512 - val_loss: 0.0116 - val_mse: 0.0116 - val_mae: 0.0819 - lr: 1.0000e-05 - 88ms/epoch - 5ms/step
Epoch 79/500

Epoch 00079: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0514 - val_loss: 0.0116 - val_mse: 0.0116 - val_mae: 0.0818 - lr: 1.0000e-05 - 88ms/epoch - 5ms/step
Epoch 80/500

Epoch 00080: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0513 - val_loss: 0.0116 - val_mse: 0.0116 - val_mae: 0.0819 - lr: 1.0000e-05 - 90ms/epoch - 5ms/step
Epoch 81/500

Epoch 00081: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0513 - val_loss: 0.0116 - val_mse: 0.0116 - val_mae: 0.0818 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 82/500

Epoch 00082: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0515 - val_loss: 0.0116 - val_mse: 0.0116 - val_mae: 0.0818 - lr: 1.0000e-05 - 89ms/epoch - 5ms/step
Epoch 83/500

Epoch 00083: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0521 - val_loss: 0.0116 - val_mse: 0.0116 - val_mae: 0.0818 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 84/500

Epoch 00084: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0498 - val_loss: 0.0116 - val_mse: 0.0116 - val_mae: 0.0818 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 85/500

Epoch 00085: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0511 - val_loss: 0.0116 - val_mse: 0.0116 - val_mae: 0.0818 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 86/500

Epoch 00086: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0538 - val_loss: 0.0116 - val_mse: 0.0116 - val_mae: 0.0818 - lr: 1.0000e-05 - 79ms/epoch - 5ms/step
Epoch 87/500

Epoch 00087: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0509 - val_loss: 0.0116 - val_mse: 0.0116 - val_mae: 0.0818 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 88/500

Epoch 00088: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0550 - val_loss: 0.0116 - val_mse: 0.0116 - val_mae: 0.0818 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 89/500

Epoch 00089: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0505 - val_loss: 0.0116 - val_mse: 0.0116 - val_mae: 0.0819 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 90/500

Epoch 00090: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0520 - val_loss: 0.0116 - val_mse: 0.0116 - val_mae: 0.0819 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 91/500

Epoch 00091: val_loss did not improve from 0.01093
17/17 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0495 - val_loss: 0.0116 - val_mse: 0.0116 - val_mae: 0.0818 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 00091: early stopping
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	50.75% Accuracy
MSE:	 39.71707698339657 
RMSE:	 6.302148600548591 
MAPE:	 5.205453851116191

EMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 18.546398408821787 
RMSE:	 4.306552961339474 
MAPE:	 3.4160340524918316

WMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 100.42411916020163 
RMSE:	 10.021183520932126 
MAPE:	 8.070563561893302

DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
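The double EMA described above is defined as DEMA = 2*EMA(p) - EMA(EMA(p)), which offsets much of a single EMA's lag. A pure-Python sketch follows; the notebook itself calls talib.DEMA, and alpha = 2/(timeperiod + 1) is the standard EMA smoothing assumed here.

```python
# Sketch of the double EMA documented above: DEMA = 2*EMA(p) - EMA(EMA(p)).
# Illustrative only; the notebook uses talib.DEMA. The EMA is seeded with
# the first price and smoothed with alpha = 2 / (timeperiod + 1).

def ema(prices, timeperiod):
    alpha = 2 / (timeperiod + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

def dema(prices, timeperiod=30):
    e1 = ema(prices, timeperiod)
    e2 = ema(e1, timeperiod)          # EMA of the EMA
    return [2 * a - b for a, b in zip(e1, e2)]
```

On a constant series the DEMA equals the series itself, and on a trending series it sits closer to the latest price than a plain EMA does, which is the lag reduction the indicator is designed for.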

Working on DEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.44 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4436.126, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3965.317, Time=0.04 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.40 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3736.589, Time=0.06 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3598.951, Time=0.09 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=0.98 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.98 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3600.951, Time=0.22 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.253 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1795.475
Date:                Sun, 12 Dec 2021   AIC                           3598.951
Time:                        14:20:56   BIC                           3617.714
Sample:                             0   HQIC                          3606.157
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1983      0.003   -389.581      0.000      -1.204      -1.192
ar.L2         -0.8973      0.006   -139.732      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.649      0.000      -0.410      -0.387
sigma2         5.0573      0.023    215.292      0.000       5.011       5.103
===================================================================================
Ljung-Box (L1) (Q):                  14.41   Jarque-Bera (JB):           2460553.80
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.89
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.74
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.14773, saving model to LSTM3.h5
10/10 - 2s - loss: 0.3200 - mse: 0.3200 - mae: 0.4435 - val_loss: 0.1477 - val_mse: 0.1477 - val_mae: 0.3131 - lr: 0.0010 - 2s/epoch - 228ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.14773 to 0.11676, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0717 - mse: 0.0717 - mae: 0.2237 - val_loss: 0.1168 - val_mse: 0.1168 - val_mae: 0.2746 - lr: 0.0010 - 81ms/epoch - 8ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.11676
10/10 - 0s - loss: 0.0523 - mse: 0.0523 - mae: 0.1925 - val_loss: 0.1617 - val_mse: 0.1617 - val_mae: 0.3272 - lr: 0.0010 - 54ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.11676
10/10 - 0s - loss: 0.0287 - mse: 0.0287 - mae: 0.1368 - val_loss: 0.2190 - val_mse: 0.2190 - val_mae: 0.3905 - lr: 0.0010 - 72ms/epoch - 7ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.11676
10/10 - 0s - loss: 0.0301 - mse: 0.0301 - mae: 0.1390 - val_loss: 0.2329 - val_mse: 0.2329 - val_mae: 0.4074 - lr: 0.0010 - 51ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.11676
10/10 - 0s - loss: 0.0244 - mse: 0.0244 - mae: 0.1254 - val_loss: 0.2363 - val_mse: 0.2363 - val_mae: 0.4131 - lr: 0.0010 - 63ms/epoch - 6ms/step
Epoch 7/500

Epoch 00007: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00007: val_loss did not improve from 0.11676
10/10 - 0s - loss: 0.0192 - mse: 0.0192 - mae: 0.1103 - val_loss: 0.2484 - val_mse: 0.2484 - val_mae: 0.4273 - lr: 0.0010 - 58ms/epoch - 6ms/step
Epoch 8/500

Epochs 00008–00052: val_loss did not improve from 0.11676 (training loss plateaued around 0.015–0.019; val_loss hovered near 0.250–0.256). ReduceLROnPlateau lowered the learning rate from 1.0e-04 to the 1.0e-05 floor at epoch 12.
[45 near-identical epoch lines condensed]
Epoch 00052: early stopping
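The schedule visible in these logs (learning rate stepping 1e-03 → 1e-04 → 1e-05, then early stopping long after the best epoch) is the standard interaction of Keras's ReduceLROnPlateau and EarlyStopping callbacks. A framework-free sketch of that logic follows; the patience values (about 5 epochs for the LR drop, about 50 for stopping) are inferred from the log pattern and are assumptions, not read from the notebook's code.

```python
def run_schedule(val_losses, lr=1e-3, factor=0.1, min_lr=1e-5,
                 rlr_patience=5, es_patience=50):
    """Replay ReduceLROnPlateau + EarlyStopping logic over a val_loss history.

    Returns (stopping_epoch, final_lr, best_val_loss). A sketch of the
    callback behaviour seen in the log, not Keras's exact implementation.
    """
    best = float("inf")
    rlr_wait = es_wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:                      # improvement resets both counters
            best, rlr_wait, es_wait = loss, 0, 0
        else:
            rlr_wait += 1
            es_wait += 1
            if rlr_wait >= rlr_patience:     # plateau: cut the learning rate
                lr = max(lr * factor, min_lr)
                rlr_wait = 0
            if es_wait >= es_patience:       # long plateau: stop training
                return epoch, lr, best
    return len(val_losses), lr, best
```

Replaying a history that improves for two epochs and then plateaus reproduces the shape of the run above: two LR cuts down to the `min_lr` floor, then early stopping 50 epochs after the best validation loss.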
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	50.75% Accuracy
MSE:	 39.71707698339657 
RMSE:	 6.302148600548591 
MAPE:	 5.205453851116191

EMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 18.546398408821787 
RMSE:	 4.306552961339474 
MAPE:	 3.4160340524918316

WMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 100.42411916020163 
RMSE:	 10.021183520932126 
MAPE:	 8.070563561893302

DEMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 42.724210554814 
RMSE:	 6.536375949623308 
MAPE:	 5.453197818871469
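The error figures above are standard regression metrics, and the accuracy lines are directional hit rates. A minimal sketch of how they can be computed; the exact definitions of "Prediction vs Close" and "Prediction vs Prediction" in this notebook are assumptions (here read as the sign of the day-over-day change in the prediction against the sign of the change in a reference series).

```python
import math

def regression_errors(actual, predicted):
    """Return (MSE, RMSE, MAPE-in-percent) for two equal-length series."""
    n = len(actual)
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n
    mape = 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n
    return mse, math.sqrt(mse), mape

def directional_accuracy(reference, predicted):
    """Percent of steps where the predicted change has the same sign as the
    reference change (one plausible reading of the accuracy lines above)."""
    hits = sum(
        (predicted[i] - predicted[i - 1]) * (reference[i] - reference[i - 1]) > 0
        for i in range(1, len(predicted))
    )
    return 100.0 * hits / (len(predicted) - 1)
```

Note that RMSE is always the square root of MSE, which is a quick consistency check on the printed results (e.g. 6.302² ≈ 39.717 for SMA).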
KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18

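KAMA is an exponential moving average whose smoothing constant adapts to Kaufman's efficiency ratio: it tracks price closely in trending markets and flattens out in choppy ones. A pure-Python sketch of the standard textbook formula, seeding with the first available price; TA-Lib's internal seeding and unstable-period handling may differ slightly at the start of the series.

```python
def kama(prices, timeperiod=30, fast=2, slow=30):
    """Kaufman Adaptive Moving Average (standard formula, not TA-Lib's exact code)."""
    fast_sc, slow_sc = 2.0 / (fast + 1), 2.0 / (slow + 1)
    out = [None] * len(prices)
    value = prices[timeperiod - 1]            # seed value (assumption)
    for t in range(timeperiod, len(prices)):
        change = abs(prices[t] - prices[t - timeperiod])
        volatility = sum(abs(prices[i] - prices[i - 1])
                         for i in range(t - timeperiod + 1, t + 1))
        er = change / volatility if volatility else 0.0   # efficiency ratio
        sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2    # adaptive smoothing
        value = value + sc * (prices[t] - value)
        out[t] = value
    return out
```

On a flat series KAMA simply reproduces the price; on a steady trend the efficiency ratio approaches 1 and the average tracks price with the fast smoothing constant.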
Working on KAMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.34 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4190.464, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3724.371, Time=0.05 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.29 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3494.154, Time=0.08 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3357.435, Time=0.09 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.16 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.74 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3359.435, Time=0.19 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 2.983 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1674.717
Date:                Sun, 12 Dec 2021   AIC                           3357.435
Time:                        14:22:08   BIC                           3376.198
Sample:                             0   HQIC                          3364.641
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1955      0.003   -381.246      0.000      -1.202      -1.189
ar.L2         -0.8964      0.007   -135.835      0.000      -0.909      -0.883
ar.L3         -0.3971      0.006    -67.229      0.000      -0.409      -0.385
sigma2         3.7466      0.018    211.623      0.000       3.712       3.781
===================================================================================
Ljung-Box (L1) (Q):                  14.20   Jarque-Bera (JB):           2338363.32
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.01   Skew:                             3.76
Prob(H) (two-sided):                  0.00   Kurtosis:                       266.93
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

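The information criteria in the SARIMAX table follow directly from the reported log-likelihood. The ARIMA(3,3,0) fit has k = 4 estimated parameters (three AR coefficients plus sigma2), and the BIC/HQIC values are consistent with statsmodels using the effective sample size after d = 3 differencing (808 − 3 = 805); that sample-size convention is an inference from the numbers, not documented here.

```python
import math

loglik = -1674.717   # Log Likelihood from the SARIMAX table above
k = 4                # ar.L1, ar.L2, ar.L3, sigma2
n_eff = 808 - 3      # observations left after d = 3 differencing (assumption)

aic = 2 * k - 2 * loglik                 # ≈ 3357.435, matching the table
bic = k * math.log(n_eff) - 2 * loglik   # ≈ 3376.198, matching the table
```

The stepwise search above minimizes exactly this AIC across candidate (p, d, q) orders, which is why ARIMA(3,3,0) at 3357.435 beats ARIMA(2,3,0) at 3494.154.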
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.33764, saving model to LSTM3.h5
45/45 - 3s - loss: 0.2028 - mse: 0.2028 - mae: 0.3485 - val_loss: 0.3376 - val_mse: 0.3376 - val_mae: 0.5256 - lr: 0.0010 - 3s/epoch - 62ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.33764 to 0.18076, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0340 - mse: 0.0340 - mae: 0.1503 - val_loss: 0.1808 - val_mse: 0.1808 - val_mae: 0.3622 - lr: 0.0010 - 238ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.18076 to 0.10560, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0182 - mse: 0.0182 - mae: 0.1091 - val_loss: 0.1056 - val_mse: 0.1056 - val_mae: 0.2593 - lr: 0.0010 - 209ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.10560 to 0.07765, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0144 - mse: 0.0144 - mae: 0.0962 - val_loss: 0.0777 - val_mse: 0.0777 - val_mae: 0.2164 - lr: 0.0010 - 233ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.07765 to 0.06452, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0130 - mse: 0.0130 - mae: 0.0903 - val_loss: 0.0645 - val_mse: 0.0645 - val_mae: 0.1964 - lr: 0.0010 - 229ms/epoch - 5ms/step
Epochs 00006–00055: val_loss did not improve from 0.06452 (training loss fell to ≈ 0.006; val_loss hovered near 0.065–0.077). ReduceLROnPlateau cut the learning rate to 1.0e-04 at epoch 10 and to the 1.0e-05 floor at epoch 15.
[50 near-identical epoch lines condensed]
Epoch 00055: early stopping

KAMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 23.65848136204503 
RMSE:	 4.863998495275778 
MAPE:	 3.972687129543795
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14

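MIDPOINT is the simplest of these overlap studies: the average of the highest and lowest price over the trailing window. A short sketch, assuming the window includes the current bar as TA-Lib does.

```python
def midpoint(prices, timeperiod=14):
    """(highest + lowest) / 2 over the trailing `timeperiod` values."""
    out = [None] * len(prices)
    for t in range(timeperiod - 1, len(prices)):
        window = prices[t - timeperiod + 1: t + 1]
        out[t] = (max(window) + min(window)) / 2.0
    return out
```

On a strictly increasing series the midpoint lags the close by half the window's range, which makes it one of the smoother (and therefore less volatile) inputs fed to the ARIMA stage.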
Working on MIDPOINT predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.38 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4212.289, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3747.746, Time=0.05 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.24 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3523.401, Time=0.08 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3387.759, Time=0.09 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.18 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.85 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3389.758, Time=0.19 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.099 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1689.879
Date:                Sun, 12 Dec 2021   AIC                           3387.759
Time:                        14:23:32   BIC                           3406.522
Sample:                             0   HQIC                          3394.964
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1878      0.003   -345.315      0.000      -1.195      -1.181
ar.L2         -0.8876      0.007   -121.809      0.000      -0.902      -0.873
ar.L3         -0.3957      0.007    -60.127      0.000      -0.409      -0.383
sigma2         3.8904      0.020    193.404      0.000       3.851       3.930
===================================================================================
Ljung-Box (L1) (Q):                  13.21   Jarque-Bera (JB):           1659080.01
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.08   Skew:                             3.28
Prob(H) (two-sided):                  0.00   Kurtosis:                       225.31
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.03274, saving model to LSTM3.h5
58/58 - 3s - loss: 0.0795 - mse: 0.0795 - mae: 0.2101 - val_loss: 0.0327 - val_mse: 0.0327 - val_mae: 0.1384 - lr: 0.0010 - 3s/epoch - 44ms/step
Epochs 00002–00032: val_loss did not improve from 0.03274 (training loss ≈ 0.006–0.014; val_loss hovered near 0.066–0.113). ReduceLROnPlateau cut the learning rate to 1.0e-04 at epoch 6 and to the 1.0e-05 floor at epoch 11.
[31 near-identical epoch lines condensed; log truncated at epoch 32]
58/58 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0595 - val_loss: 0.0711 - val_mse: 0.0711 - val_mae: 0.2252 - lr: 1.0000e-05 - 298ms/epoch - 5ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.03274
58/58 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0623 - val_loss: 0.0714 - val_mse: 0.0714 - val_mae: 0.2258 - lr: 1.0000e-05 - 264ms/epoch - 5ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.03274
58/58 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0602 - val_loss: 0.0711 - val_mse: 0.0711 - val_mae: 0.2253 - lr: 1.0000e-05 - 249ms/epoch - 4ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.03274
58/58 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0628 - val_loss: 0.0710 - val_mse: 0.0710 - val_mae: 0.2249 - lr: 1.0000e-05 - 265ms/epoch - 5ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.03274
58/58 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0628 - val_loss: 0.0709 - val_mse: 0.0709 - val_mae: 0.2248 - lr: 1.0000e-05 - 248ms/epoch - 4ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.03274
58/58 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0594 - val_loss: 0.0713 - val_mse: 0.0713 - val_mae: 0.2255 - lr: 1.0000e-05 - 241ms/epoch - 4ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.03274
58/58 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0577 - val_loss: 0.0715 - val_mse: 0.0715 - val_mae: 0.2259 - lr: 1.0000e-05 - 268ms/epoch - 5ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.03274
58/58 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0607 - val_loss: 0.0709 - val_mse: 0.0709 - val_mae: 0.2248 - lr: 1.0000e-05 - 281ms/epoch - 5ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.03274
58/58 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0596 - val_loss: 0.0705 - val_mse: 0.0705 - val_mae: 0.2238 - lr: 1.0000e-05 - 296ms/epoch - 5ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.03274
58/58 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0590 - val_loss: 0.0707 - val_mse: 0.0707 - val_mae: 0.2243 - lr: 1.0000e-05 - 246ms/epoch - 4ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.03274
58/58 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0611 - val_loss: 0.0708 - val_mse: 0.0708 - val_mae: 0.2244 - lr: 1.0000e-05 - 231ms/epoch - 4ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.03274
58/58 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0592 - val_loss: 0.0713 - val_mse: 0.0713 - val_mae: 0.2254 - lr: 1.0000e-05 - 244ms/epoch - 4ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.03274
58/58 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0607 - val_loss: 0.0705 - val_mse: 0.0705 - val_mae: 0.2237 - lr: 1.0000e-05 - 244ms/epoch - 4ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.03274
58/58 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0563 - val_loss: 0.0710 - val_mse: 0.0710 - val_mae: 0.2248 - lr: 1.0000e-05 - 224ms/epoch - 4ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.03274
58/58 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0606 - val_loss: 0.0712 - val_mse: 0.0712 - val_mae: 0.2252 - lr: 1.0000e-05 - 226ms/epoch - 4ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.03274
58/58 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0600 - val_loss: 0.0715 - val_mse: 0.0715 - val_mae: 0.2256 - lr: 1.0000e-05 - 263ms/epoch - 5ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.03274
58/58 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0614 - val_loss: 0.0713 - val_mse: 0.0713 - val_mae: 0.2252 - lr: 1.0000e-05 - 255ms/epoch - 4ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.03274
58/58 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0595 - val_loss: 0.0718 - val_mse: 0.0718 - val_mae: 0.2263 - lr: 1.0000e-05 - 288ms/epoch - 5ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.03274
58/58 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0560 - val_loss: 0.0718 - val_mse: 0.0718 - val_mae: 0.2261 - lr: 1.0000e-05 - 238ms/epoch - 4ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.03274
58/58 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0582 - val_loss: 0.0717 - val_mse: 0.0717 - val_mae: 0.2260 - lr: 1.0000e-05 - 241ms/epoch - 4ms/step
Epoch 00051: early stopping
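The checkpoint, learning-rate, and early-stopping messages in the log above are produced by standard Keras callbacks. A minimal sketch of a callback setup consistent with this trace follows; the patience values are assumptions inferred from the log (LR drops roughly every five stalled epochs, training stops about fifty epochs after the last improvement), and `LSTM3.h5` is the filename seen in the log:

```python
# Hedged sketch: callbacks that would produce log lines like the ones above.
# Patience values are assumptions inferred from the trace, not confirmed.
from tensorflow.keras.callbacks import (
    ModelCheckpoint, ReduceLROnPlateau, EarlyStopping)

callbacks = [
    # "val_loss improved ... saving model to LSTM3.h5"
    ModelCheckpoint('LSTM3.h5', monitor='val_loss',
                    save_best_only=True, verbose=2),
    # "ReduceLROnPlateau reducing learning rate to ..."
    ReduceLROnPlateau(monitor='val_loss', factor=0.1,
                      patience=5, min_lr=1e-5, verbose=2),
    # "Epoch 00051: early stopping"
    EarlyStopping(monitor='val_loss', patience=50, verbose=2),
]
# model.fit(X_train, y_train, epochs=500,
#           validation_data=(X_val, y_val),
#           callbacks=callbacks, verbose=2)
```

With `save_best_only=True`, the weights evaluated later come from the best validation epoch, not the final one, which is why the run can drift for dozens of epochs after the last "improved" message without harming the reported metrics.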
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	50.75% Accuracy
MSE:	 39.71707698339657 
RMSE:	 6.302148600548591 
MAPE:	 5.205453851116191

EMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 18.546398408821787 
RMSE:	 4.306552961339474 
MAPE:	 3.4160340524918316

WMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 100.42411916020163 
RMSE:	 10.021183520932126 
MAPE:	 8.070563561893302

DEMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 42.724210554814 
RMSE:	 6.536375949623308 
MAPE:	 5.453197818871469

KAMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 23.65848136204503 
RMSE:	 4.863998495275778 
MAPE:	 3.972687129543795

MIDPOINT
Prediction vs Close:		45.9% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 21.13528196098577 
RMSE:	 4.5973124715409295 
MAPE:	 3.8079919461917573
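The per-indicator scores above pair a directional-accuracy check ("Prediction vs Close") with standard error metrics. A minimal sketch of how such numbers might be computed (the helper name is illustrative, not the notebook's own function):

```python
import numpy as np

def evaluate(pred, close):
    """Directional accuracy plus MSE/RMSE/MAPE for a forecast series.

    Illustrative sketch: directional accuracy here compares the sign of
    each predicted day-over-day move with the sign of the actual move.
    """
    pred = np.asarray(pred, dtype=float)
    close = np.asarray(close, dtype=float)
    acc = np.mean(np.sign(np.diff(pred)) == np.sign(np.diff(close))) * 100
    mse = np.mean((pred - close) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((close - pred) / close)) * 100
    return acc, mse, rmse, mape
```

Note that a model can score well on MSE/MAPE (tracking the level closely) while hovering near 50% directional accuracy, which is exactly the pattern several indicators show above.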
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
19
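The help text above is TA-Lib's docstring for `T3`. Tillson's T3 can also be expressed as a "generalized DEMA" (GD) applied three times, where GD(x) = EMA(x)·(1+v) − EMA(EMA(x))·v. The following is an approximation sketch of that idea in pandas, not TA-Lib's exact implementation (TA-Lib also discards an initial lookback window of NaNs, which this sketch does not reproduce):

```python
import pandas as pd

def gd(series, period=5, v=0.7):
    """Generalized DEMA: EMA*(1+v) - EMA(EMA)*v (Tillson)."""
    e1 = series.ewm(span=period, adjust=False).mean()
    e2 = e1.ewm(span=period, adjust=False).mean()
    return e1 * (1 + v) - e2 * v

def t3(series, period=5, v=0.7):
    """T3 = GD applied three times to the price series."""
    return gd(gd(gd(series, period, v), period, v), period, v)
```

With `vfactor=0.7` (TA-Lib's default, and the value shown above), T3 trades some smoothing for reduced lag relative to a plain triple EMA.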

Working on T3 predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.37 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4414.515, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3944.062, Time=0.04 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.35 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3715.173, Time=0.07 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3577.471, Time=0.08 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.49 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.59 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3579.471, Time=0.20 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.249 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1784.736
Date:                Sun, 12 Dec 2021   AIC                           3577.471
Time:                        14:24:58   BIC                           3596.235
Sample:                             0   HQIC                          3584.677
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.844      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.861      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.862      0.000      -0.410      -0.387
sigma2         4.9242      0.023    215.469      0.000       4.879       4.969
===================================================================================
Ljung-Box (L1) (Q):                  14.55   Jarque-Bera (JB):           2468024.38
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       274.15
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.03963, saving model to LSTM3.h5
43/43 - 2s - loss: 0.1598 - mse: 0.1598 - mae: 0.2724 - val_loss: 0.0396 - val_mse: 0.0396 - val_mae: 0.1735 - lr: 0.0010 - 2s/epoch - 55ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.03963 to 0.02142, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0428 - mse: 0.0428 - mae: 0.1692 - val_loss: 0.0214 - val_mse: 0.0214 - val_mae: 0.1156 - lr: 0.0010 - 231ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.02142
43/43 - 0s - loss: 0.0214 - mse: 0.0214 - mae: 0.1165 - val_loss: 0.0221 - val_mse: 0.0221 - val_mae: 0.1191 - lr: 0.0010 - 211ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.02142
43/43 - 0s - loss: 0.0116 - mse: 0.0116 - mae: 0.0849 - val_loss: 0.0245 - val_mse: 0.0245 - val_mae: 0.1290 - lr: 0.0010 - 226ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.02142
43/43 - 0s - loss: 0.0104 - mse: 0.0104 - mae: 0.0807 - val_loss: 0.0249 - val_mse: 0.0249 - val_mae: 0.1309 - lr: 0.0010 - 186ms/epoch - 4ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.02142 to 0.02114, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0095 - mse: 0.0095 - mae: 0.0767 - val_loss: 0.0211 - val_mse: 0.0211 - val_mae: 0.1176 - lr: 0.0010 - 213ms/epoch - 5ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0084 - mse: 0.0084 - mae: 0.0722 - val_loss: 0.0247 - val_mse: 0.0247 - val_mae: 0.1308 - lr: 0.0010 - 193ms/epoch - 4ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0081 - mse: 0.0081 - mae: 0.0702 - val_loss: 0.0220 - val_mse: 0.0220 - val_mae: 0.1211 - lr: 0.0010 - 199ms/epoch - 5ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0647 - val_loss: 0.0248 - val_mse: 0.0248 - val_mae: 0.1307 - lr: 0.0010 - 210ms/epoch - 5ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0674 - val_loss: 0.0272 - val_mse: 0.0272 - val_mae: 0.1386 - lr: 0.0010 - 185ms/epoch - 4ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00011: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0072 - mse: 0.0072 - mae: 0.0665 - val_loss: 0.0265 - val_mse: 0.0265 - val_mae: 0.1360 - lr: 0.0010 - 182ms/epoch - 4ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0627 - val_loss: 0.0224 - val_mse: 0.0224 - val_mae: 0.1218 - lr: 1.0000e-04 - 188ms/epoch - 4ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0595 - val_loss: 0.0220 - val_mse: 0.0220 - val_mae: 0.1201 - lr: 1.0000e-04 - 192ms/epoch - 4ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0602 - val_loss: 0.0221 - val_mse: 0.0221 - val_mae: 0.1206 - lr: 1.0000e-04 - 189ms/epoch - 4ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0598 - val_loss: 0.0213 - val_mse: 0.0213 - val_mae: 0.1177 - lr: 1.0000e-04 - 188ms/epoch - 4ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00016: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0593 - val_loss: 0.0214 - val_mse: 0.0214 - val_mae: 0.1181 - lr: 1.0000e-04 - 241ms/epoch - 6ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0581 - val_loss: 0.0214 - val_mse: 0.0214 - val_mae: 0.1181 - lr: 1.0000e-05 - 230ms/epoch - 5ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0584 - val_loss: 0.0215 - val_mse: 0.0215 - val_mae: 0.1181 - lr: 1.0000e-05 - 219ms/epoch - 5ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0570 - val_loss: 0.0214 - val_mse: 0.0214 - val_mae: 0.1179 - lr: 1.0000e-05 - 215ms/epoch - 5ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0573 - val_loss: 0.0214 - val_mse: 0.0214 - val_mae: 0.1180 - lr: 1.0000e-05 - 220ms/epoch - 5ms/step
Epoch 21/500

Epoch 00021: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00021: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0575 - val_loss: 0.0214 - val_mse: 0.0214 - val_mae: 0.1180 - lr: 1.0000e-05 - 226ms/epoch - 5ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0580 - val_loss: 0.0214 - val_mse: 0.0214 - val_mae: 0.1181 - lr: 1.0000e-05 - 211ms/epoch - 5ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0558 - val_loss: 0.0215 - val_mse: 0.0215 - val_mae: 0.1182 - lr: 1.0000e-05 - 233ms/epoch - 5ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0580 - val_loss: 0.0214 - val_mse: 0.0214 - val_mae: 0.1181 - lr: 1.0000e-05 - 183ms/epoch - 4ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0557 - val_loss: 0.0215 - val_mse: 0.0215 - val_mae: 0.1184 - lr: 1.0000e-05 - 204ms/epoch - 5ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0601 - val_loss: 0.0216 - val_mse: 0.0216 - val_mae: 0.1187 - lr: 1.0000e-05 - 213ms/epoch - 5ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0573 - val_loss: 0.0216 - val_mse: 0.0216 - val_mae: 0.1187 - lr: 1.0000e-05 - 191ms/epoch - 4ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0592 - val_loss: 0.0216 - val_mse: 0.0216 - val_mae: 0.1187 - lr: 1.0000e-05 - 190ms/epoch - 4ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0566 - val_loss: 0.0217 - val_mse: 0.0217 - val_mae: 0.1192 - lr: 1.0000e-05 - 183ms/epoch - 4ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0583 - val_loss: 0.0217 - val_mse: 0.0217 - val_mae: 0.1188 - lr: 1.0000e-05 - 185ms/epoch - 4ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0565 - val_loss: 0.0217 - val_mse: 0.0217 - val_mae: 0.1190 - lr: 1.0000e-05 - 207ms/epoch - 5ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0561 - val_loss: 0.0217 - val_mse: 0.0217 - val_mae: 0.1189 - lr: 1.0000e-05 - 192ms/epoch - 4ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0586 - val_loss: 0.0217 - val_mse: 0.0217 - val_mae: 0.1189 - lr: 1.0000e-05 - 206ms/epoch - 5ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0579 - val_loss: 0.0216 - val_mse: 0.0216 - val_mae: 0.1187 - lr: 1.0000e-05 - 215ms/epoch - 5ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0574 - val_loss: 0.0217 - val_mse: 0.0217 - val_mae: 0.1190 - lr: 1.0000e-05 - 181ms/epoch - 4ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0584 - val_loss: 0.0218 - val_mse: 0.0218 - val_mae: 0.1195 - lr: 1.0000e-05 - 223ms/epoch - 5ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0548 - val_loss: 0.0219 - val_mse: 0.0219 - val_mae: 0.1195 - lr: 1.0000e-05 - 238ms/epoch - 6ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0585 - val_loss: 0.0219 - val_mse: 0.0219 - val_mae: 0.1198 - lr: 1.0000e-05 - 237ms/epoch - 6ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0543 - val_loss: 0.0220 - val_mse: 0.0220 - val_mae: 0.1199 - lr: 1.0000e-05 - 177ms/epoch - 4ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0568 - val_loss: 0.0220 - val_mse: 0.0220 - val_mae: 0.1200 - lr: 1.0000e-05 - 220ms/epoch - 5ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0544 - val_loss: 0.0220 - val_mse: 0.0220 - val_mae: 0.1199 - lr: 1.0000e-05 - 231ms/epoch - 5ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0578 - val_loss: 0.0219 - val_mse: 0.0219 - val_mae: 0.1196 - lr: 1.0000e-05 - 218ms/epoch - 5ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0553 - val_loss: 0.0218 - val_mse: 0.0218 - val_mae: 0.1195 - lr: 1.0000e-05 - 220ms/epoch - 5ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0574 - val_loss: 0.0218 - val_mse: 0.0218 - val_mae: 0.1193 - lr: 1.0000e-05 - 186ms/epoch - 4ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0594 - val_loss: 0.0218 - val_mse: 0.0218 - val_mae: 0.1194 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0574 - val_loss: 0.0218 - val_mse: 0.0218 - val_mae: 0.1195 - lr: 1.0000e-05 - 207ms/epoch - 5ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0553 - val_loss: 0.0217 - val_mse: 0.0217 - val_mae: 0.1191 - lr: 1.0000e-05 - 219ms/epoch - 5ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0556 - val_loss: 0.0217 - val_mse: 0.0217 - val_mae: 0.1190 - lr: 1.0000e-05 - 188ms/epoch - 4ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0567 - val_loss: 0.0217 - val_mse: 0.0217 - val_mae: 0.1188 - lr: 1.0000e-05 - 209ms/epoch - 5ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0552 - val_loss: 0.0216 - val_mse: 0.0216 - val_mae: 0.1185 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0563 - val_loss: 0.0215 - val_mse: 0.0215 - val_mae: 0.1182 - lr: 1.0000e-05 - 221ms/epoch - 5ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0567 - val_loss: 0.0214 - val_mse: 0.0214 - val_mae: 0.1179 - lr: 1.0000e-05 - 186ms/epoch - 4ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0562 - val_loss: 0.0214 - val_mse: 0.0214 - val_mae: 0.1176 - lr: 1.0000e-05 - 220ms/epoch - 5ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0588 - val_loss: 0.0215 - val_mse: 0.0215 - val_mae: 0.1182 - lr: 1.0000e-05 - 184ms/epoch - 4ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0547 - val_loss: 0.0215 - val_mse: 0.0215 - val_mae: 0.1183 - lr: 1.0000e-05 - 197ms/epoch - 5ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.02114
43/43 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0560 - val_loss: 0.0216 - val_mse: 0.0216 - val_mae: 0.1186 - lr: 1.0000e-05 - 230ms/epoch - 5ms/step
Epoch 00056: early stopping
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	50.75% Accuracy
MSE:	 39.71707698339657 
RMSE:	 6.302148600548591 
MAPE:	 5.205453851116191

EMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 18.546398408821787 
RMSE:	 4.306552961339474 
MAPE:	 3.4160340524918316

WMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 100.42411916020163 
RMSE:	 10.021183520932126 
MAPE:	 8.070563561893302

DEMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 42.724210554814 
RMSE:	 6.536375949623308 
MAPE:	 5.453197818871469

KAMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 23.65848136204503 
RMSE:	 4.863998495275778 
MAPE:	 3.972687129543795

MIDPOINT
Prediction vs Close:		45.9% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 21.13528196098577 
RMSE:	 4.5973124715409295 
MAPE:	 3.8079919461917573

T3
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 101.89511494721017 
RMSE:	 10.09431101894578 
MAPE:	 8.111172771475383
TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9

Working on TEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.49 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4352.703, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3889.412, Time=0.04 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.26 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3689.930, Time=0.05 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3574.245, Time=0.08 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.21 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.83 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3576.245, Time=0.20 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.223 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1783.123
Date:                Sun, 12 Dec 2021   AIC                           3574.245
Time:                        14:26:26   BIC                           3593.008
Sample:                             0   HQIC                          3581.451
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1480      0.004   -302.430      0.000      -1.155      -1.141
ar.L2         -0.8300      0.008    -99.682      0.000      -0.846      -0.814
ar.L3         -0.3687      0.007    -50.527      0.000      -0.383      -0.354
sigma2         4.9055      0.028    175.970      0.000       4.851       4.960
===================================================================================
Ljung-Box (L1) (Q):                  11.61   Jarque-Bera (JB):           1261976.58
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.16   Skew:                             2.52
Prob(H) (two-sided):                  0.00   Kurtosis:                       196.90
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.16719, saving model to LSTM3.h5
90/90 - 3s - loss: 0.1229 - mse: 0.1229 - mae: 0.2524 - val_loss: 0.1672 - val_mse: 0.1672 - val_mae: 0.3380 - lr: 0.0010 - 3s/epoch - 34ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.16719
90/90 - 0s - loss: 0.0169 - mse: 0.0169 - mae: 0.1022 - val_loss: 0.3016 - val_mse: 0.3016 - val_mae: 0.4812 - lr: 0.0010 - 352ms/epoch - 4ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.16719
90/90 - 0s - loss: 0.0179 - mse: 0.0179 - mae: 0.1024 - val_loss: 0.2529 - val_mse: 0.2529 - val_mae: 0.4416 - lr: 0.0010 - 373ms/epoch - 4ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.16719
90/90 - 0s - loss: 0.0126 - mse: 0.0126 - mae: 0.0882 - val_loss: 0.2506 - val_mse: 0.2506 - val_mae: 0.4436 - lr: 0.0010 - 373ms/epoch - 4ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.16719
90/90 - 0s - loss: 0.0102 - mse: 0.0102 - mae: 0.0789 - val_loss: 0.2940 - val_mse: 0.2940 - val_mae: 0.4901 - lr: 0.0010 - 364ms/epoch - 4ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.16719
90/90 - 1s - loss: 0.0109 - mse: 0.0109 - mae: 0.0817 - val_loss: 0.2786 - val_mse: 0.2786 - val_mae: 0.4787 - lr: 0.0010 - 546ms/epoch - 6ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.16719
90/90 - 0s - loss: 0.0127 - mse: 0.0127 - mae: 0.0905 - val_loss: 0.2438 - val_mse: 0.2438 - val_mae: 0.4423 - lr: 1.0000e-04 - 369ms/epoch - 4ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.16719
90/90 - 0s - loss: 0.0078 - mse: 0.0078 - mae: 0.0676 - val_loss: 0.2522 - val_mse: 0.2522 - val_mae: 0.4509 - lr: 1.0000e-04 - 373ms/epoch - 4ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.16719
90/90 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0668 - val_loss: 0.2652 - val_mse: 0.2652 - val_mae: 0.4639 - lr: 1.0000e-04 - 398ms/epoch - 4ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.16719
90/90 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0658 - val_loss: 0.2694 - val_mse: 0.2694 - val_mae: 0.4680 - lr: 1.0000e-04 - 443ms/epoch - 5ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.16719
90/90 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0662 - val_loss: 0.2746 - val_mse: 0.2746 - val_mae: 0.4732 - lr: 1.0000e-04 - 403ms/epoch - 4ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.16719
90/90 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0642 - val_loss: 0.2746 - val_mse: 0.2746 - val_mae: 0.4732 - lr: 1.0000e-05 - 369ms/epoch - 4ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.16719
90/90 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0642 - val_loss: 0.2742 - val_mse: 0.2742 - val_mae: 0.4728 - lr: 1.0000e-05 - 377ms/epoch - 4ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.16719
90/90 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0629 - val_loss: 0.2763 - val_mse: 0.2763 - val_mae: 0.4749 - lr: 1.0000e-05 - 376ms/epoch - 4ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.16719
90/90 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0631 - val_loss: 0.2776 - val_mse: 0.2776 - val_mae: 0.4762 - lr: 1.0000e-05 - 359ms/epoch - 4ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00016: val_loss did not improve from 0.16719
90/90 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0639 - val_loss: 0.2778 - val_mse: 0.2778 - val_mae: 0.4764 - lr: 1.0000e-05 - 491ms/epoch - 5ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.16719
90/90 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0621 - val_loss: 0.2802 - val_mse: 0.2802 - val_mae: 0.4787 - lr: 1.0000e-05 - 368ms/epoch - 4ms/step
[... epochs 18-50 trimmed: val_loss did not improve from 0.16719; train loss drifted between 0.0068 and 0.0051 while val_loss rose from 0.2810 to 0.2977 ...]
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.16719
90/90 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0584 - val_loss: 0.2998 - val_mse: 0.2998 - val_mae: 0.4983 - lr: 1.0000e-05 - 402ms/epoch - 4ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	50.75% Accuracy
MSE:	 39.71707698339657 
RMSE:	 6.302148600548591 
MAPE:	 5.205453851116191

EMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 18.546398408821787 
RMSE:	 4.306552961339474 
MAPE:	 3.4160340524918316

WMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 100.42411916020163 
RMSE:	 10.021183520932126 
MAPE:	 8.070563561893302

DEMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 42.724210554814 
RMSE:	 6.536375949623308 
MAPE:	 5.453197818871469

KAMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 23.65848136204503 
RMSE:	 4.863998495275778 
MAPE:	 3.972687129543795

MIDPOINT
Prediction vs Close:		45.9% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 21.13528196098577 
RMSE:	 4.5973124715409295 
MAPE:	 3.8079919461917573

T3
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 101.89511494721017 
RMSE:	 10.09431101894578 
MAPE:	 8.111172771475383

TEMA
Prediction vs Close:		49.25% Accuracy
Prediction vs Prediction:	49.25% Accuracy
MSE:	 10.999022660597852 
RMSE:	 3.316477447623887 
MAPE:	 2.655228978622383
Runtime: mins: 11.426148450466659

Architecture Used

In [103]:
from google.colab import files
import cv2
uploaded = files.upload()
In [104]:
img = cv2.imread('Experiment3.png')
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture '+imgfile,fontsize=18)
plt.imshow(img)
Out[104]:
<matplotlib.image.AxesImage at 0x7f75dc03b110>

Model Plots

In [101]:
with open('simulation3_data.json') as json_file:
    simulation3 = json.load(json_file)
fileimg = 'Experiment3'
In [102]:
for i in range(len(list(simulation3.keys()))):
  SIM = list(simulation3.keys())[i]
  plot_train(simulation3,SIM)
  plot_test(simulation3,SIM)
----- Train RMSE for SMA ----- 9.349037175519484
----- Train_MSE_LSTM for SMA ----- 87.40449610924534
----- Train MAE LSTM for SMA ----- 8.179049661388795
----- Test RMSE for SMA----- 6.302148600548591
----- Test_MSE_LSTM for SMA----- 39.71707698339657
----- Test_MAE_LSTM for SMA----- 5.205453851116191
----- Train RMSE for EMA ----- 10.408056772244768
----- Train_MSE_LSTM for EMA ----- 108.32764577427017
----- Train MAE LSTM for EMA ----- 9.272168477176514
----- Test RMSE for EMA----- 4.306552961339474
----- Test_MSE_LSTM for EMA----- 18.546398408821787
----- Test_MAE_LSTM for EMA----- 3.4160340524918316
----- Train RMSE for WMA ----- 10.758673148854621
----- Train_MSE_LSTM for WMA ----- 115.7490479238854
----- Train MAE LSTM for WMA ----- 9.635844746281826
----- Test RMSE for WMA----- 10.021183520932126
----- Test_MSE_LSTM for WMA----- 100.42411916020163
----- Test_MAE_LSTM for WMA----- 8.070563561893302
----- Train RMSE for DEMA ----- 12.471033474287747
----- Train_MSE_LSTM for DEMA ----- 155.52667591680552
----- Train MAE LSTM for DEMA ----- 11.27911882792913
----- Test RMSE for DEMA----- 6.536375949623308
----- Test_MSE_LSTM for DEMA----- 42.724210554814
----- Test_MAE_LSTM for DEMA----- 5.453197818871469
----- Train RMSE for KAMA ----- 11.226948346371925
----- Train_MSE_LSTM for KAMA ----- 126.0443691721033
----- Train MAE LSTM for KAMA ----- 10.243495551915379
----- Test RMSE for KAMA----- 4.863998495275778
----- Test_MSE_LSTM for KAMA----- 23.65848136204503
----- Test_MAE_LSTM for KAMA----- 3.972687129543795
----- Train RMSE for MIDPOINT ----- 9.85503947109961
----- Train_MSE_LSTM for MIDPOINT ----- 97.1218029769313
----- Train MAE LSTM for MIDPOINT ----- 8.735687748390125
----- Test RMSE for MIDPOINT----- 4.5973124715409295
----- Test_MSE_LSTM for MIDPOINT----- 21.13528196098577
----- Test_MAE_LSTM for MIDPOINT----- 3.8079919461917573
----- Train RMSE for T3 ----- 12.321219257781456
----- Train_MSE_LSTM for T3 ----- 151.8124439983246
----- Train MAE LSTM for T3 ----- 11.208239243433828
----- Test RMSE for T3----- 10.09431101894578
----- Test_MSE_LSTM for T3----- 101.89511494721017
----- Test_MAE_LSTM for T3----- 8.111172771475383
----- Train RMSE for TEMA ----- 7.47666641398143
----- Train_MSE_LSTM for TEMA ----- 55.900540665957934
----- Train MAE LSTM for TEMA ----- 5.218204706016292
----- Test RMSE for TEMA----- 3.316477447623887
----- Test_MSE_LSTM for TEMA----- 10.999022660597852
----- Test_MAE_LSTM for TEMA----- 2.655228978622383

Univariate ARIMA Multistep Multivariate LSTM Hybrid Model Experiment 4

From the above experiments it is evident that higher moving-average periods produce loss plots with under-represented data and underfitting; hence only the MAs with smaller periods, such as T3 or TRIMA, are kept. Going forward, EMA, WMA, and DEMA will be ignored.
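The decomposition behind the hybrid can be illustrated in miniature: the moving average forms a smooth low-volatility series for ARIMA, the residual forms a high-volatility series for the LSTM, and the two component forecasts are recombined by addition. A minimal sketch, using a plain pandas rolling mean as a stand-in for the TA-Lib MA functions (the price values here are made up for illustration):

```python
import numpy as np
import pandas as pd

# Illustrative close prices (hypothetical values)
close = pd.Series([10.0, 11.0, 10.5, 12.0, 12.5, 11.8, 13.0, 13.5])
period = 3

# Smooth "low volatility" component: modelled by ARIMA in the hybrid
low_vol = close.rolling(period).mean().fillna(0)

# Residual "high volatility" component: modelled by the LSTM
high_vol = close.subtract(low_vol, fill_value=0)

# The component forecasts are later recombined by simple addition
assert np.allclose(low_vol + high_vol, close)
```

The same split appears below in the main loop, where `functions[ma]` computes the low-volatility series and `df2[i].subtract(low_vol[i], fill_value=0)` forms the residual.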

In [106]:
def get_lstm(data, original_data, train_len, test_len, img_file, ma, lstm_len=3):
    # prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Get data and check shape
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)  # X will be of shape 224 x 3 x 21 (each 3 x 21 block is 3 days of data); yc holds the corresponding closing prices
    # pdb.set_trace()
    X_train, X_test, = split_train_test(X)
    y_train, y_test, = split_train_test(y)
    # yc_train, yc_test, = split_train_test(original_data)
    index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20
    input_dim = X_train.shape[1]#3
    feature_size = X_train.shape[2]#24
    output_dim = y_train.shape[1]#1



    # # Option 1
    # # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    # model.add(Dense(units=64,activation='relu'))
    # model.add(Dropout(0.5))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')

    # ## Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()


    # # option 2
    # model = Sequential()
    # model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
    # model.add(Dense(64))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(lr = 0.001), loss='mean_squared_error', metrics=['accuracy'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()




    # # Option 3
    # # define custom activation
    # # 
    # class Double_Tanh(Activation):
    #     def __init__(self, activation, **kwargs):
    #         super(Double_Tanh, self).__init__(activation, **kwargs)
    #         self.__name__ = 'double_tanh'

    # def double_tanh(x):
    #     return (K.tanh(x) * 2)

    # get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
    #     # Model Generation
    # model = Sequential()
    # #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    # model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
    # model.add(Dense(1))
    # model.add(Activation(double_tanh))
    # model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 4
    # Set up & fit LSTM RNN
    model = Sequential()
    model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(input_dim, feature_size)))
    model.add(LSTM(units=int(lstm_len/2)))
    model.add(Dense(1, activation='sigmoid'))  # note: sigmoid outputs (0, 1) while targets are scaled to (-1, 1); 'tanh' would match the target range
    model.compile(loss='mean_squared_error', optimizer='adam')
    # Common code
    callbacks = [
    EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    ModelCheckpoint('LSTM4.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file+'.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # plot loss
    fname2 = img_file+'-'+ma
    plt.title(img_file+'-'+ma+' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2+'.png',dpi='figure')
    pyplot.show()



    # Generate predictions
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = (y_scaler.inverse_transform(predictiontr)-det).tolist()
    outputtr = []
    for i in range(len(predictiontr)):
        outputtr.extend(predictiontr[i])
    predictiontr = outputtr
    # Generate error data

    ## replace with yc , xtest generated by new multistep method
    mse_tr = mean_squared_error(y_train, predictiontr)
    rmse_tr = mse_tr ** 0.5
    # mape = mean_absolute_percentage_error(X_test, pd.Series(predictiontr))
    mae_tr = mean_absolute_error(y_train, pd.Series(predictiontr))
    # Original_tr = pd.Series(yc_train)
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()


    predictionte = model.predict(X_test, verbose=0)
    predictionte =( y_scaler.inverse_transform(predictionte)-det).tolist()
    outputte = []
    for i in range(len(predictionte)):
        outputte.extend(predictionte[i])
    predictionte = outputte
    # Generate error data

    mse_te = mean_squared_error(y_test, predictionte)
    rmse_te = mse_te ** 0.5
    # mape = mean_absolute_percentage_error(X_test, pd.Series(predictionte))
    mae_te = mean_absolute_error(y_test, pd.Series(predictionte))
    # Original_te = pd.Series(yc_test)
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()

    return Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,Original_te,predictionte, mse_te,rmse_te,mae_te
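Further down, Experiment 4's driver scores directional accuracy with an explicit loop over consecutive steps. The same bookkeeping can be written vectorised; this is a sketch under the same tie-handling as the loop (only strict same-direction moves count), and `directional_accuracy` is a hypothetical helper, not a function from the notebook:

```python
import numpy as np

def directional_accuracy(prediction, actual):
    """Reproduce the notebook's two directional-accuracy scores.

    'prediction vs close'     : predicted move vs. the previous actual close
    'prediction vs prediction': predicted move vs. the previous prediction
    A step scores 1 only when both series strictly move the same way.
    """
    p = np.asarray(prediction, dtype=float)
    a = np.asarray(actual, dtype=float)
    up_move = a[1:] > a[:-1]
    down_move = a[1:] < a[:-1]
    vs_close = ((p[1:] > a[:-1]) & up_move) | ((p[1:] < a[:-1]) & down_move)
    vs_pred = ((p[1:] > p[:-1]) & up_move) | ((p[1:] < p[:-1]) & down_move)
    return vs_close.mean(), vs_pred.mean()
```

For example, with `prediction = [1, 2, 3, 2]` and `actual = [1, 2, 3, 4]`, both scores come out to 2/3: the last step predicts a fall while the close rises.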
In [107]:
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation4 = {}
    imgfile = 'Experiment4'
    for ma in optimized_period:
              print(ma)
              print(functions[ma])
              print(int(optimized_period[ma]))
            # if ma == 'SMA':
              low_vol = df.apply(lambda c:  functions[ma](c, timeperiod = int( optimized_period[ma])))
              low_vol = low_vol.fillna(0)
              low_vol_data = df['close']
              high_vol = pd.DataFrame()
              df2 = df.copy()
              for i in df2.columns:
                if i in low_vol.columns:
                  high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
              high_vol_data = df['close']
              ## *****************************************************
              # Generate ARIMA and LSTM predictions
              print('\nWorking on ' + ma + ' predictions')
              try:
                print('parameters used : ', train_len, test_len)
                low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse,low_vol_mae = get_arima(low_vol,low_vol_data, train_len, test_len)
              except:
                  print('ARIMA error, skipping to next MA type')
                  continue
              Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse,high_vol_mae, = get_lstm(high_vol,high_vol_data, train_len, test_len,imgfile,ma)
              final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps 
              mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
              rmse_ftr = mse_ftr ** 0.5
              mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
              mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)

              final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
              mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
              rmse = mse ** 0.5
              mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
              mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
              # Generate prediction accuracy
              actual = df['close'].tail(test_len).values
              result_1 = []
              result_2 = []
              for i in range(1, len(final_prediction)):
                  # Compare prediction to previous close price
                  if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                      result_1.append(1)
                  elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                      result_1.append(1)
                  else:
                      result_1.append(0)

                  # Compare prediction to previous prediction
                  if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                      result_2.append(1)
                  elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                      result_2.append(1)
                  else:
                      result_2.append(0)

              accuracy_1 = np.mean(result_1)
              accuracy_2 = np.mean(result_2)

              simulation4[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
                                            'rmse': low_vol_rmse, 'mae' : low_vol_mae},
                                'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
                                            'rmse': high_vol_rmse, 'mae' : high_vol_mae},
                                'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
                                            'rmse': rmse_ftr, 'mae' : mae_ftr},
                                'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
                                          'rmse': rmse, 'mae': mae },
                                'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}

              # save simulation data here as checkpoint
              with open('simulation4_data.json', 'w') as fp:
                  json.dump(simulation4, fp)

              for key in simulation4.keys():
                  print('\n' + key)
                  print('Prediction vs Close:\t\t' + str(round(100*simulation4[key]['accuracy']['prediction vs close'], 2))
                        + '% Accuracy')
                  print('Prediction vs Prediction:\t' + str(round(100*simulation4[key]['accuracy']['prediction vs prediction'], 2))
                        + '% Accuracy')
                  print('MSE:\t', simulation4[key]['final']['mse'],
                        '\nRMSE:\t', simulation4[key]['final']['rmse'],
                        '\nMAPE:\t', simulation4[key]['final']['mae'])  # note: prints MAE under the MAPE label; mape is computed above but not stored
            # else:
            #   break
    elapsed = timeit.default_timer() - start_time
    print('Runtime: mins:',elapsed/60)
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17

Working on SMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.45 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4157.020, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3687.148, Time=0.05 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.21 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3458.651, Time=0.09 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3322.133, Time=0.11 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=0.75 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.80 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3324.133, Time=0.20 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 2.697 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1657.067
Date:                Sun, 12 Dec 2021   AIC                           3322.133
Time:                        14:39:08   BIC                           3340.897
Sample:                             0   HQIC                          3329.339
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1966      0.003   -387.226      0.000      -1.203      -1.191
ar.L2         -0.8952      0.006   -138.692      0.000      -0.908      -0.883
ar.L3         -0.3968      0.006    -68.284      0.000      -0.408      -0.385
sigma2         3.5858      0.017    214.535      0.000       3.553       3.619
===================================================================================
Ljung-Box (L1) (Q):                  14.47   Jarque-Bera (JB):           2428881.42
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       271.99
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05005, saving model to LSTM4.h5
48/48 - 4s - loss: 1.3045 - val_loss: 0.0501 - lr: 0.0010 - 4s/epoch - 80ms/step
[... epochs 2-42 trimmed: val_loss did not improve from 0.05005; ReduceLROnPlateau cut the learning rate to 1e-04 at epoch 6 and to 1e-05 at epoch 11; train loss fell from 1.2224 to 0.8351 while val_loss crept up from 0.0536 to 0.0671 ...]
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.05005
48/48 - 0s - loss: 0.8346 - val_loss: 0.0672 - lr: 1.0000e-05 - 273ms/epoch - 6ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.05005
48/48 - 0s - loss: 0.8341 - val_loss: 0.0672 - lr: 1.0000e-05 - 229ms/epoch - 5ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.05005
48/48 - 0s - loss: 0.8336 - val_loss: 0.0673 - lr: 1.0000e-05 - 272ms/epoch - 6ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.05005
48/48 - 0s - loss: 0.8330 - val_loss: 0.0673 - lr: 1.0000e-05 - 267ms/epoch - 6ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.05005
48/48 - 0s - loss: 0.8325 - val_loss: 0.0674 - lr: 1.0000e-05 - 231ms/epoch - 5ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.05005
48/48 - 0s - loss: 0.8320 - val_loss: 0.0674 - lr: 1.0000e-05 - 242ms/epoch - 5ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.05005
48/48 - 0s - loss: 0.8315 - val_loss: 0.0675 - lr: 1.0000e-05 - 326ms/epoch - 7ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.05005
48/48 - 0s - loss: 0.8310 - val_loss: 0.0675 - lr: 1.0000e-05 - 310ms/epoch - 6ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.05005
48/48 - 0s - loss: 0.8305 - val_loss: 0.0676 - lr: 1.0000e-05 - 294ms/epoch - 6ms/step
Epoch 00051: early stopping
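The training log above shows the learning rate stepping down 1e-3 → 1e-4 → 1e-5 and training halting once `val_loss` stops improving. A minimal sketch of the patience logic behind Keras's `ReduceLROnPlateau` and `EarlyStopping` callbacks, with illustrative patience values (the notebook's actual callback configuration is not shown here):

```python
def simulate_plateau_schedule(val_losses, lr=1e-3, factor=0.1,
                              lr_patience=4, stop_patience=50, min_lr=1e-5):
    """Mimic ReduceLROnPlateau + EarlyStopping on a val_loss history.

    lr_patience/stop_patience are illustrative guesses, not values read
    from the notebook's callbacks.
    """
    best = float("inf")
    wait_lr = wait_stop = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best = loss
            wait_lr = wait_stop = 0
        else:
            wait_lr += 1
            wait_stop += 1
            if wait_lr >= lr_patience:
                lr = max(lr * factor, min_lr)   # floor at min_lr, as in the log
                wait_lr = 0
            if wait_stop >= stop_patience:
                return epoch, lr, best           # early stopping
    return len(val_losses), lr, best
```

With a loss curve that improves for three epochs and then plateaus, the simulated run stops 50 epochs after the best value, with the learning rate pinned at `min_lr`, matching the shape of the log above.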
SMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	51.87% Accuracy
MSE:	 25.081351909450348 
RMSE:	 5.008128583557969 
MAPE:	 3.9377037384058786
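The result blocks report MSE, RMSE, MAPE (in percent), and two directional-accuracy figures. The error metrics are standard; the directional definitions below are a plausible reconstruction (sign of the predicted move against the actual close-to-close move), since the notebook's exact formulas are not shown in this output:

```python
import numpy as np

def report_metrics(y_true, y_pred):
    """MSE / RMSE / MAPE as printed in the results blocks, plus a
    directional accuracy: did the prediction move the same way as price?
    The directional definition is an assumption, not taken verbatim
    from the notebook."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    mse = np.mean((y_true - y_pred) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100
    # compare the sign of day-over-day changes
    d_close = np.sign(np.diff(y_true))
    d_pred = np.sign(np.diff(y_pred))
    acc_close = np.mean(d_pred == d_close) * 100
    return mse, rmse, mape, acc_close
```

Note that a model can have a low RMSE yet near-coin-flip directional accuracy, which is exactly the pattern in the SMA results above (RMSE ≈ 5.0, direction ≈ 54%).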
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51
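The TA-Lib help text above documents `EMA` but not its recurrence. The standard scheme, which TA-Lib also follows, seeds with a simple average of the first window and then smooths recursively with `k = 2 / (timeperiod + 1)`; leading values have no defined EMA and stay NaN (hence the unstable-period count printed with each indicator):

```python
import numpy as np

def ema(prices, timeperiod=30):
    """Classic SMA-seeded exponential moving average.
    Sketch of what talib.EMA computes; not a drop-in replacement."""
    prices = np.asarray(prices, float)
    out = np.full(len(prices), np.nan)
    if len(prices) < timeperiod:
        return out
    out[timeperiod - 1] = prices[:timeperiod].mean()   # SMA seed
    k = 2.0 / (timeperiod + 1)
    for i in range(timeperiod, len(prices)):
        out[i] = prices[i] * k + out[i - 1] * (1 - k)
    return out
```

The shorter the `timeperiod`, the heavier the weight `k` on the newest price, and the more volatility the smoothed series retains, which is the lever the introduction suggests tuning to balance the ARIMA and LSTM components.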

Working on EMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.43 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4231.556, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3761.238, Time=0.05 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.27 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3532.227, Time=0.07 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3394.496, Time=0.09 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=0.90 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.69 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3396.496, Time=0.20 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 2.734 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1693.248
Date:                Sun, 12 Dec 2021   AIC                           3394.496
Time:                        14:40:42   BIC                           3413.260
Sample:                             0   HQIC                          3401.702
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.569      0.000      -1.204      -1.192
ar.L2         -0.8976      0.006   -139.811      0.000      -0.910      -0.885
ar.L3         -0.3984      0.006    -68.662      0.000      -0.410      -0.387
sigma2         3.9230      0.018    215.372      0.000       3.887       3.959
===================================================================================
Ljung-Box (L1) (Q):                  14.54   Jarque-Bera (JB):           2462173.05
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.82
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
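The stepwise search logged above is pmdarima's Hyndman-Khandakar-style strategy: score the current order, score its neighbours, move to whichever lowers AIC, and stop at a local minimum. A simplified sketch of that search loop over (p, q) only; the real `auto_arima` also varies d, seasonal terms, and the intercept, and fits a full ARIMA per candidate where this sketch takes an arbitrary scoring callable:

```python
def stepwise_search(aic, start=(1, 1), max_p=5, max_q=5):
    """Greedy neighbour search minimising an AIC-like score.
    `aic` is any callable scoring an (p, q) tuple; in pmdarima the
    score comes from actually fitting the candidate ARIMA."""
    best, best_score = start, aic(start)
    while True:
        p, q = best
        # candidate orders one step away, clipped to the search bounds
        neighbours = [(p + dp, q + dq) for dp, dq in
                      ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= p + dp <= max_p and 0 <= q + dq <= max_q]
        candidate = min(neighbours, key=aic)
        if aic(candidate) < best_score:
            best, best_score = candidate, aic(candidate)
        else:
            return best, best_score   # local minimum reached
```

This greedy descent explains the log: each printed row is one candidate fit, `AIC=inf` marks fits that failed to converge, and the search stops once no neighbour of ARIMA(3,3,0) improves the criterion.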

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04901, saving model to LSTM4.h5
16/16 - 4s - loss: 1.4775 - val_loss: 0.0490 - lr: 0.0010 - 4s/epoch - 247ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.04901 to 0.04888, saving model to LSTM4.h5
16/16 - 0s - loss: 1.4455 - val_loss: 0.0489 - lr: 0.0010 - 118ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.04888 to 0.04882, saving model to LSTM4.h5
16/16 - 0s - loss: 1.4175 - val_loss: 0.0488 - lr: 0.0010 - 113ms/epoch - 7ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.3914 - val_loss: 0.0489 - lr: 0.0010 - 103ms/epoch - 6ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.3659 - val_loss: 0.0492 - lr: 0.0010 - 114ms/epoch - 7ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.3397 - val_loss: 0.0496 - lr: 0.0010 - 102ms/epoch - 6ms/step
Epoch 7/500

Epoch 00007: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00007: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.3125 - val_loss: 0.0501 - lr: 0.0010 - 95ms/epoch - 6ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2943 - val_loss: 0.0502 - lr: 1.0000e-04 - 108ms/epoch - 7ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2915 - val_loss: 0.0503 - lr: 1.0000e-04 - 95ms/epoch - 6ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2889 - val_loss: 0.0503 - lr: 1.0000e-04 - 107ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2862 - val_loss: 0.0504 - lr: 1.0000e-04 - 96ms/epoch - 6ms/step
Epoch 12/500

Epoch 00012: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00012: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2836 - val_loss: 0.0505 - lr: 1.0000e-04 - 93ms/epoch - 6ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2819 - val_loss: 0.0505 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2816 - val_loss: 0.0505 - lr: 1.0000e-05 - 105ms/epoch - 7ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2814 - val_loss: 0.0505 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2811 - val_loss: 0.0505 - lr: 1.0000e-05 - 91ms/epoch - 6ms/step
Epoch 17/500

Epoch 00017: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00017: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2809 - val_loss: 0.0505 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2806 - val_loss: 0.0505 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2804 - val_loss: 0.0505 - lr: 1.0000e-05 - 108ms/epoch - 7ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2801 - val_loss: 0.0505 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2799 - val_loss: 0.0505 - lr: 1.0000e-05 - 110ms/epoch - 7ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2796 - val_loss: 0.0505 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2794 - val_loss: 0.0506 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2791 - val_loss: 0.0506 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2789 - val_loss: 0.0506 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2786 - val_loss: 0.0506 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2784 - val_loss: 0.0506 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2781 - val_loss: 0.0506 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2779 - val_loss: 0.0506 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2776 - val_loss: 0.0506 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2774 - val_loss: 0.0506 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2772 - val_loss: 0.0506 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2769 - val_loss: 0.0506 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2767 - val_loss: 0.0506 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2764 - val_loss: 0.0506 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2762 - val_loss: 0.0507 - lr: 1.0000e-05 - 106ms/epoch - 7ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2759 - val_loss: 0.0507 - lr: 1.0000e-05 - 109ms/epoch - 7ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2757 - val_loss: 0.0507 - lr: 1.0000e-05 - 105ms/epoch - 7ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2754 - val_loss: 0.0507 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2752 - val_loss: 0.0507 - lr: 1.0000e-05 - 108ms/epoch - 7ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2749 - val_loss: 0.0507 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2747 - val_loss: 0.0507 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2744 - val_loss: 0.0507 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2742 - val_loss: 0.0507 - lr: 1.0000e-05 - 112ms/epoch - 7ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2740 - val_loss: 0.0507 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2737 - val_loss: 0.0507 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2735 - val_loss: 0.0507 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2732 - val_loss: 0.0507 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2730 - val_loss: 0.0508 - lr: 1.0000e-05 - 105ms/epoch - 7ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2727 - val_loss: 0.0508 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2725 - val_loss: 0.0508 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2722 - val_loss: 0.0508 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.04882
16/16 - 0s - loss: 1.2720 - val_loss: 0.0508 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 00053: early stopping
SMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	51.87% Accuracy
MSE:	 25.081351909450348 
RMSE:	 5.008128583557969 
MAPE:	 3.9377037384058786

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 37.86506779592453 
RMSE:	 6.153459823215273 
MAPE:	 4.830217084369749
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49
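The WMA documented above is the linearly weighted average: weights 1..timeperiod with the most recent price weighted heaviest. A self-contained sketch of that computation (illustrative, not a drop-in for `talib.WMA`):

```python
import numpy as np

def wma(prices, timeperiod=30):
    """Linearly weighted moving average: weight i+1 on the (i+1)-th
    newest price in each window, newest price weighted heaviest."""
    prices = np.asarray(prices, float)
    w = np.arange(1, timeperiod + 1, dtype=float)
    out = np.full(len(prices), np.nan)
    for i in range(timeperiod - 1, len(prices)):
        window = prices[i - timeperiod + 1 : i + 1]
        out[i] = np.dot(window, w) / w.sum()   # weighted mean of the window
    return out
```

Because the weights decay linearly rather than exponentially, WMA reacts to recent moves faster than an SMA of the same period but retains more of the window's history than an EMA, consistent with its error metrics landing between the two in the results below.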

Working on WMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.43 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4264.089, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3793.930, Time=0.04 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.26 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3564.923, Time=0.07 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3427.258, Time=0.08 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.25 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.45 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3429.258, Time=0.19 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 2.799 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1709.629
Date:                Sun, 12 Dec 2021   AIC                           3427.258
Time:                        14:42:01   BIC                           3446.021
Sample:                             0   HQIC                          3434.464
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1981      0.003   -389.386      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.699      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.737      0.000      -0.410      -0.387
sigma2         4.0860      0.019    215.311      0.000       4.049       4.123
===================================================================================
Ljung-Box (L1) (Q):                  14.57   Jarque-Bera (JB):           2460901.70
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.75
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05020, saving model to LSTM4.h5
17/17 - 4s - loss: 1.4028 - val_loss: 0.0502 - lr: 0.0010 - 4s/epoch - 211ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.05020 to 0.04800, saving model to LSTM4.h5
17/17 - 0s - loss: 1.3285 - val_loss: 0.0480 - lr: 0.0010 - 121ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.04800 to 0.04527, saving model to LSTM4.h5
17/17 - 0s - loss: 1.2689 - val_loss: 0.0453 - lr: 0.0010 - 122ms/epoch - 7ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.04527 to 0.04270, saving model to LSTM4.h5
17/17 - 0s - loss: 1.2122 - val_loss: 0.0427 - lr: 0.0010 - 141ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.04270 to 0.04103, saving model to LSTM4.h5
17/17 - 0s - loss: 1.1524 - val_loss: 0.0410 - lr: 0.0010 - 133ms/epoch - 8ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.04103 to 0.04060, saving model to LSTM4.h5
17/17 - 0s - loss: 1.0902 - val_loss: 0.0406 - lr: 0.0010 - 138ms/epoch - 8ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.04060
17/17 - 0s - loss: 1.0302 - val_loss: 0.0412 - lr: 0.0010 - 124ms/epoch - 7ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.9764 - val_loss: 0.0424 - lr: 0.0010 - 121ms/epoch - 7ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.9295 - val_loss: 0.0439 - lr: 0.0010 - 108ms/epoch - 6ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8887 - val_loss: 0.0454 - lr: 0.0010 - 120ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00011: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8535 - val_loss: 0.0468 - lr: 0.0010 - 106ms/epoch - 6ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8328 - val_loss: 0.0469 - lr: 1.0000e-04 - 109ms/epoch - 6ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8300 - val_loss: 0.0471 - lr: 1.0000e-04 - 105ms/epoch - 6ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8273 - val_loss: 0.0472 - lr: 1.0000e-04 - 92ms/epoch - 5ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8247 - val_loss: 0.0474 - lr: 1.0000e-04 - 109ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00016: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8221 - val_loss: 0.0475 - lr: 1.0000e-04 - 105ms/epoch - 6ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8204 - val_loss: 0.0475 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8201 - val_loss: 0.0475 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8199 - val_loss: 0.0476 - lr: 1.0000e-05 - 101ms/epoch - 6ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8196 - val_loss: 0.0476 - lr: 1.0000e-05 - 107ms/epoch - 6ms/step
Epoch 21/500

Epoch 00021: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00021: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8193 - val_loss: 0.0476 - lr: 1.0000e-05 - 101ms/epoch - 6ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8191 - val_loss: 0.0476 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8188 - val_loss: 0.0476 - lr: 1.0000e-05 - 111ms/epoch - 7ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8186 - val_loss: 0.0477 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8183 - val_loss: 0.0477 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8180 - val_loss: 0.0477 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8178 - val_loss: 0.0477 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8175 - val_loss: 0.0477 - lr: 1.0000e-05 - 113ms/epoch - 7ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8173 - val_loss: 0.0477 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8170 - val_loss: 0.0478 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8167 - val_loss: 0.0478 - lr: 1.0000e-05 - 91ms/epoch - 5ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8165 - val_loss: 0.0478 - lr: 1.0000e-05 - 108ms/epoch - 6ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8162 - val_loss: 0.0478 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8159 - val_loss: 0.0478 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8157 - val_loss: 0.0479 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8154 - val_loss: 0.0479 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8151 - val_loss: 0.0479 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8149 - val_loss: 0.0479 - lr: 1.0000e-05 - 114ms/epoch - 7ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8146 - val_loss: 0.0480 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8143 - val_loss: 0.0480 - lr: 1.0000e-05 - 101ms/epoch - 6ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8141 - val_loss: 0.0480 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8138 - val_loss: 0.0480 - lr: 1.0000e-05 - 120ms/epoch - 7ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8135 - val_loss: 0.0480 - lr: 1.0000e-05 - 134ms/epoch - 8ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8133 - val_loss: 0.0481 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8130 - val_loss: 0.0481 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8127 - val_loss: 0.0481 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8125 - val_loss: 0.0481 - lr: 1.0000e-05 - 113ms/epoch - 7ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8122 - val_loss: 0.0482 - lr: 1.0000e-05 - 106ms/epoch - 6ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8119 - val_loss: 0.0482 - lr: 1.0000e-05 - 101ms/epoch - 6ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8117 - val_loss: 0.0482 - lr: 1.0000e-05 - 114ms/epoch - 7ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8114 - val_loss: 0.0482 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8111 - val_loss: 0.0482 - lr: 1.0000e-05 - 108ms/epoch - 6ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8108 - val_loss: 0.0483 - lr: 1.0000e-05 - 122ms/epoch - 7ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8106 - val_loss: 0.0483 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8103 - val_loss: 0.0483 - lr: 1.0000e-05 - 113ms/epoch - 7ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.04060
17/17 - 0s - loss: 0.8100 - val_loss: 0.0483 - lr: 1.0000e-05 - 125ms/epoch - 7ms/step
Epoch 00056: early stopping
SMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	51.87% Accuracy
MSE:	 25.081351909450348 
RMSE:	 5.008128583557969 
MAPE:	 3.9377037384058786

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 37.86506779592453 
RMSE:	 6.153459823215273 
MAPE:	 4.830217084369749

WMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 53.37812690087662 
RMSE:	 7.306033595657539 
MAPE:	 5.95316326588041
DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
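DEMA is defined as `2*EMA(n) - EMA(EMA(n), n)`: the second smoothing pass cancels much of the EMA's lag, and the double pass is why DEMA needs a longer warm-up before producing valid values than a plain EMA. A sketch under that standard definition (an SMA-seeded EMA is assumed; TA-Lib's internals may differ in seeding details):

```python
import numpy as np

def ema(x, n):
    # SMA-seeded EMA with smoothing k = 2/(n+1); NaN until seeded
    x = np.asarray(x, float)
    out = np.full(len(x), np.nan)
    out[n - 1] = x[:n].mean()
    k = 2.0 / (n + 1)
    for i in range(n, len(x)):
        out[i] = x[i] * k + out[i - 1] * (1 - k)
    return out

def dema(prices, timeperiod=30):
    """Double EMA: 2*EMA(n) - EMA(EMA(n), n)."""
    e1 = ema(prices, timeperiod)
    # the second EMA runs on the valid tail of the first
    e2 = np.full(len(e1), np.nan)
    e2[timeperiod - 1:] = ema(e1[timeperiod - 1:], timeperiod)
    return 2 * e1 - e2
```

The reduced lag makes DEMA track price more tightly, i.e. it preserves more volatility, which fits the pattern in this notebook of the faster-reacting averages feeding the ARIMA stage noisier series.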

Working on DEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.42 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4436.126, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3965.317, Time=0.05 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.40 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3736.589, Time=0.06 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3598.951, Time=0.09 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.00 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.98 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3600.951, Time=0.17 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.209 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1795.475
Date:                Sun, 12 Dec 2021   AIC                           3598.951
Time:                        14:43:22   BIC                           3617.714
Sample:                             0   HQIC                          3606.157
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1983      0.003   -389.581      0.000      -1.204      -1.192
ar.L2         -0.8973      0.006   -139.732      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.649      0.000      -0.410      -0.387
sigma2         5.0573      0.023    215.292      0.000       5.011       5.103
===================================================================================
Ljung-Box (L1) (Q):                  14.41   Jarque-Bera (JB):           2460553.80
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.89
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.74
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05015, saving model to LSTM4.h5
10/10 - 4s - loss: 1.4573 - val_loss: 0.0502 - lr: 0.0010 - 4s/epoch - 359ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.05015 to 0.05012, saving model to LSTM4.h5
10/10 - 0s - loss: 1.4383 - val_loss: 0.0501 - lr: 0.0010 - 105ms/epoch - 10ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.05012 to 0.05001, saving model to LSTM4.h5
10/10 - 0s - loss: 1.4219 - val_loss: 0.0500 - lr: 0.0010 - 93ms/epoch - 9ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.05001 to 0.04985, saving model to LSTM4.h5
10/10 - 0s - loss: 1.4070 - val_loss: 0.0499 - lr: 0.0010 - 83ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.04985 to 0.04968, saving model to LSTM4.h5
10/10 - 0s - loss: 1.3928 - val_loss: 0.0497 - lr: 0.0010 - 79ms/epoch - 8ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.04968 to 0.04952, saving model to LSTM4.h5
10/10 - 0s - loss: 1.3788 - val_loss: 0.0495 - lr: 0.0010 - 96ms/epoch - 10ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.04952 to 0.04937, saving model to LSTM4.h5
10/10 - 0s - loss: 1.3643 - val_loss: 0.0494 - lr: 0.0010 - 106ms/epoch - 11ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.04937 to 0.04925, saving model to LSTM4.h5
10/10 - 0s - loss: 1.3487 - val_loss: 0.0492 - lr: 0.0010 - 97ms/epoch - 10ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.04925 to 0.04916, saving model to LSTM4.h5
10/10 - 0s - loss: 1.3317 - val_loss: 0.0492 - lr: 0.0010 - 101ms/epoch - 10ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.04916 to 0.04913, saving model to LSTM4.h5
10/10 - 0s - loss: 1.3129 - val_loss: 0.0491 - lr: 0.0010 - 94ms/epoch - 9ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.2920 - val_loss: 0.0492 - lr: 0.0010 - 81ms/epoch - 8ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.2690 - val_loss: 0.0493 - lr: 0.0010 - 72ms/epoch - 7ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.2440 - val_loss: 0.0495 - lr: 0.0010 - 65ms/epoch - 7ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.2171 - val_loss: 0.0498 - lr: 0.0010 - 78ms/epoch - 8ms/step
Epoch 15/500

Epoch 00015: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00015: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1885 - val_loss: 0.0504 - lr: 0.0010 - 63ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1676 - val_loss: 0.0504 - lr: 1.0000e-04 - 68ms/epoch - 7ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1646 - val_loss: 0.0505 - lr: 1.0000e-04 - 65ms/epoch - 7ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1617 - val_loss: 0.0506 - lr: 1.0000e-04 - 68ms/epoch - 7ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1589 - val_loss: 0.0507 - lr: 1.0000e-04 - 75ms/epoch - 8ms/step
Epoch 20/500

Epoch 00020: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00020: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1562 - val_loss: 0.0507 - lr: 1.0000e-04 - 83ms/epoch - 8ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1543 - val_loss: 0.0507 - lr: 1.0000e-05 - 86ms/epoch - 9ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1540 - val_loss: 0.0508 - lr: 1.0000e-05 - 79ms/epoch - 8ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1537 - val_loss: 0.0508 - lr: 1.0000e-05 - 83ms/epoch - 8ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1535 - val_loss: 0.0508 - lr: 1.0000e-05 - 69ms/epoch - 7ms/step
Epoch 25/500

Epoch 00025: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00025: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1532 - val_loss: 0.0508 - lr: 1.0000e-05 - 79ms/epoch - 8ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1530 - val_loss: 0.0508 - lr: 1.0000e-05 - 80ms/epoch - 8ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1527 - val_loss: 0.0508 - lr: 1.0000e-05 - 66ms/epoch - 7ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1525 - val_loss: 0.0508 - lr: 1.0000e-05 - 66ms/epoch - 7ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1522 - val_loss: 0.0508 - lr: 1.0000e-05 - 70ms/epoch - 7ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1520 - val_loss: 0.0508 - lr: 1.0000e-05 - 69ms/epoch - 7ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1517 - val_loss: 0.0508 - lr: 1.0000e-05 - 73ms/epoch - 7ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1515 - val_loss: 0.0508 - lr: 1.0000e-05 - 74ms/epoch - 7ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1512 - val_loss: 0.0509 - lr: 1.0000e-05 - 89ms/epoch - 9ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1510 - val_loss: 0.0509 - lr: 1.0000e-05 - 82ms/epoch - 8ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1507 - val_loss: 0.0509 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1505 - val_loss: 0.0509 - lr: 1.0000e-05 - 75ms/epoch - 7ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1503 - val_loss: 0.0509 - lr: 1.0000e-05 - 78ms/epoch - 8ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1500 - val_loss: 0.0509 - lr: 1.0000e-05 - 72ms/epoch - 7ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1498 - val_loss: 0.0509 - lr: 1.0000e-05 - 82ms/epoch - 8ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1495 - val_loss: 0.0509 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1493 - val_loss: 0.0509 - lr: 1.0000e-05 - 77ms/epoch - 8ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1491 - val_loss: 0.0509 - lr: 1.0000e-05 - 65ms/epoch - 6ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1488 - val_loss: 0.0509 - lr: 1.0000e-05 - 77ms/epoch - 8ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1486 - val_loss: 0.0510 - lr: 1.0000e-05 - 83ms/epoch - 8ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1484 - val_loss: 0.0510 - lr: 1.0000e-05 - 81ms/epoch - 8ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1481 - val_loss: 0.0510 - lr: 1.0000e-05 - 82ms/epoch - 8ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1479 - val_loss: 0.0510 - lr: 1.0000e-05 - 74ms/epoch - 7ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1476 - val_loss: 0.0510 - lr: 1.0000e-05 - 72ms/epoch - 7ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1474 - val_loss: 0.0510 - lr: 1.0000e-05 - 70ms/epoch - 7ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1472 - val_loss: 0.0510 - lr: 1.0000e-05 - 67ms/epoch - 7ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1469 - val_loss: 0.0510 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1467 - val_loss: 0.0510 - lr: 1.0000e-05 - 75ms/epoch - 7ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1465 - val_loss: 0.0510 - lr: 1.0000e-05 - 79ms/epoch - 8ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1462 - val_loss: 0.0511 - lr: 1.0000e-05 - 80ms/epoch - 8ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1460 - val_loss: 0.0511 - lr: 1.0000e-05 - 67ms/epoch - 7ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1458 - val_loss: 0.0511 - lr: 1.0000e-05 - 89ms/epoch - 9ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1455 - val_loss: 0.0511 - lr: 1.0000e-05 - 83ms/epoch - 8ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1453 - val_loss: 0.0511 - lr: 1.0000e-05 - 74ms/epoch - 7ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1451 - val_loss: 0.0511 - lr: 1.0000e-05 - 70ms/epoch - 7ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.04913
10/10 - 0s - loss: 1.1448 - val_loss: 0.0511 - lr: 1.0000e-05 - 71ms/epoch - 7ms/step
Epoch 00060: early stopping
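The log above (checkpointing to `LSTM4.h5` on each `val_loss` improvement, staged learning-rate drops 1e-3 → 1e-4 → 1e-5, and "Epoch 00060: early stopping") is consistent with a Keras callback stack like the following sketch. The `patience` values are guesses inferred from the epoch numbers in the trace, not confirmed settings.

```python
# Callback configuration implied by the training log: best-only checkpoints,
# plateau-triggered LR reduction with a 1e-5 floor, and early stopping.
from tensorflow.keras.callbacks import (EarlyStopping, ModelCheckpoint,
                                        ReduceLROnPlateau)

callbacks = [
    ModelCheckpoint('LSTM4.h5', monitor='val_loss',
                    save_best_only=True, verbose=1),
    ReduceLROnPlateau(monitor='val_loss', factor=0.1,
                      patience=5, min_lr=1e-5, verbose=1),  # patience is a guess
    EarlyStopping(monitor='val_loss', patience=50, verbose=1),  # guess
]

# model.fit(X_train, y_train, epochs=500, verbose=2,
#           validation_data=(X_val, y_val), callbacks=callbacks)
```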
SMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	51.87% Accuracy
MSE:	 25.081351909450348 
RMSE:	 5.008128583557969 
MAPE:	 3.9377037384058786

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 37.86506779592453 
RMSE:	 6.153459823215273 
MAPE:	 4.830217084369749

WMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 53.37812690087662 
RMSE:	 7.306033595657539 
MAPE:	 5.95316326588041

DEMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 133.19678136227444 
RMSE:	 11.541090995320783 
MAPE:	 10.29859546777107
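The per-indicator scores above (MSE, RMSE, MAPE, and the directional-accuracy figures) can be reproduced with a small helper. `evaluate` is a hypothetical name; the "Prediction vs Close" figure is assumed to be sign-matching of day-over-day moves, and "Prediction vs Prediction" is assumed to follow the same idea against the lagged prediction series.

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Return MSE, RMSE, MAPE (%) and directional accuracy (%),
    mirroring the metrics printed for each moving-average variant."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100
    # Directional accuracy: does the predicted move match the actual move?
    direction = np.mean(
        np.sign(np.diff(y_pred)) == np.sign(np.diff(y_true))) * 100
    return mse, rmse, mape, direction
```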

KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18
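TA-Lib's KAMA can be approximated in plain NumPy to show what "adaptive" means here: the smoothing constant scales with the efficiency ratio (net change divided by total path length over the window). This is a sketch only; the seeding of the first value and warm-up handling differ slightly from TA-Lib's.

```python
import numpy as np

def kama(price, timeperiod=30, fast=2, slow=30):
    """Kaufman Adaptive Moving Average: trending data (efficiency ratio
    near 1) gets a fast EMA constant, choppy data a slow one."""
    price = np.asarray(price, dtype=float)
    out = np.full_like(price, np.nan)
    sc_fast, sc_slow = 2 / (fast + 1), 2 / (slow + 1)
    out[timeperiod] = price[timeperiod]  # simple seed (TA-Lib warms up differently)
    for t in range(timeperiod + 1, len(price)):
        change = abs(price[t] - price[t - timeperiod])
        volatility = np.sum(np.abs(np.diff(price[t - timeperiod:t + 1])))
        er = change / volatility if volatility else 0.0
        sc = (er * (sc_fast - sc_slow) + sc_slow) ** 2
        out[t] = out[t - 1] + sc * (price[t] - out[t - 1])
    return out
```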

Working on KAMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.37 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4190.464, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3724.371, Time=0.04 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.28 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3494.154, Time=0.07 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3357.435, Time=0.09 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.21 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.76 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3359.435, Time=0.21 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.077 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1674.717
Date:                Sun, 12 Dec 2021   AIC                           3357.435
Time:                        14:44:41   BIC                           3376.198
Sample:                             0   HQIC                          3364.641
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1955      0.003   -381.246      0.000      -1.202      -1.189
ar.L2         -0.8964      0.007   -135.835      0.000      -0.909      -0.883
ar.L3         -0.3971      0.006    -67.229      0.000      -0.409      -0.385
sigma2         3.7466      0.018    211.623      0.000       3.712       3.781
===================================================================================
Ljung-Box (L1) (Q):                  14.20   Jarque-Bera (JB):           2338363.32
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.01   Skew:                             3.76
Prob(H) (two-sided):                  0.00   Kurtosis:                       266.93
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
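The stepwise trace above searches candidate orders to minimise AIC. As a self-contained illustration of that criterion only (pmdarima's `auto_arima` fits full maximum-likelihood SARIMAX models, not this least-squares toy), here is AIC-based order selection for a simple AR(p):

```python
import numpy as np

def ar_aic(x, p):
    """Fit AR(p) by ordinary least squares and return its AIC
    (n*ln(RSS/n) + 2k, with k = p coefficients + 1 noise variance)."""
    X = np.column_stack([x[p - j:len(x) - j] for j in range(1, p + 1)])
    y = x[p:]
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    n = len(y)
    return n * np.log(float(resid @ resid) / n) + 2 * (p + 1)

# Synthetic AR(2) series; AIC typically recovers an order near the true one.
rng = np.random.default_rng(0)
e = rng.standard_normal(500)
x = np.zeros(500)
for t in range(2, 500):
    x[t] = 1.2 * x[t - 1] - 0.4 * x[t - 2] + e[t]
best = min(range(1, 5), key=lambda p: ar_aic(x, p))
```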

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05434, saving model to LSTM4.h5
45/45 - 4s - loss: 1.4199 - val_loss: 0.0543 - lr: 0.0010 - 4s/epoch - 97ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.05434
45/45 - 0s - loss: 1.3785 - val_loss: 0.0549 - lr: 0.0010 - 231ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.05434 to 0.05416, saving model to LSTM4.h5
45/45 - 0s - loss: 1.3278 - val_loss: 0.0542 - lr: 0.0010 - 270ms/epoch - 6ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.05416
45/45 - 0s - loss: 1.2376 - val_loss: 0.0562 - lr: 0.0010 - 237ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.05416
45/45 - 0s - loss: 1.1316 - val_loss: 0.0603 - lr: 0.0010 - 252ms/epoch - 6ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.05416
45/45 - 0s - loss: 1.0605 - val_loss: 0.0647 - lr: 0.0010 - 239ms/epoch - 5ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.05416
45/45 - 0s - loss: 1.0065 - val_loss: 0.0694 - lr: 0.0010 - 267ms/epoch - 6ms/step
Epoch 8/500

Epoch 00008: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00008: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9623 - val_loss: 0.0744 - lr: 0.0010 - 249ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9381 - val_loss: 0.0749 - lr: 1.0000e-04 - 232ms/epoch - 5ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9346 - val_loss: 0.0754 - lr: 1.0000e-04 - 268ms/epoch - 6ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9311 - val_loss: 0.0759 - lr: 1.0000e-04 - 258ms/epoch - 6ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9276 - val_loss: 0.0765 - lr: 1.0000e-04 - 230ms/epoch - 5ms/step
Epoch 13/500

Epoch 00013: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00013: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9242 - val_loss: 0.0770 - lr: 1.0000e-04 - 230ms/epoch - 5ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9221 - val_loss: 0.0771 - lr: 1.0000e-05 - 228ms/epoch - 5ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9218 - val_loss: 0.0771 - lr: 1.0000e-05 - 238ms/epoch - 5ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9215 - val_loss: 0.0772 - lr: 1.0000e-05 - 222ms/epoch - 5ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9211 - val_loss: 0.0772 - lr: 1.0000e-05 - 283ms/epoch - 6ms/step
Epoch 18/500

Epoch 00018: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00018: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9208 - val_loss: 0.0773 - lr: 1.0000e-05 - 270ms/epoch - 6ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9204 - val_loss: 0.0774 - lr: 1.0000e-05 - 308ms/epoch - 7ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9201 - val_loss: 0.0774 - lr: 1.0000e-05 - 242ms/epoch - 5ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9197 - val_loss: 0.0775 - lr: 1.0000e-05 - 224ms/epoch - 5ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9194 - val_loss: 0.0776 - lr: 1.0000e-05 - 268ms/epoch - 6ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9191 - val_loss: 0.0776 - lr: 1.0000e-05 - 246ms/epoch - 5ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9187 - val_loss: 0.0777 - lr: 1.0000e-05 - 265ms/epoch - 6ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9184 - val_loss: 0.0778 - lr: 1.0000e-05 - 230ms/epoch - 5ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9180 - val_loss: 0.0778 - lr: 1.0000e-05 - 289ms/epoch - 6ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9177 - val_loss: 0.0779 - lr: 1.0000e-05 - 256ms/epoch - 6ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9173 - val_loss: 0.0780 - lr: 1.0000e-05 - 228ms/epoch - 5ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9170 - val_loss: 0.0780 - lr: 1.0000e-05 - 243ms/epoch - 5ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9166 - val_loss: 0.0781 - lr: 1.0000e-05 - 243ms/epoch - 5ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9163 - val_loss: 0.0782 - lr: 1.0000e-05 - 274ms/epoch - 6ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9159 - val_loss: 0.0783 - lr: 1.0000e-05 - 248ms/epoch - 6ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9156 - val_loss: 0.0783 - lr: 1.0000e-05 - 303ms/epoch - 7ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9152 - val_loss: 0.0784 - lr: 1.0000e-05 - 239ms/epoch - 5ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9149 - val_loss: 0.0785 - lr: 1.0000e-05 - 238ms/epoch - 5ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9145 - val_loss: 0.0786 - lr: 1.0000e-05 - 246ms/epoch - 5ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9142 - val_loss: 0.0786 - lr: 1.0000e-05 - 246ms/epoch - 5ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9138 - val_loss: 0.0787 - lr: 1.0000e-05 - 235ms/epoch - 5ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9135 - val_loss: 0.0788 - lr: 1.0000e-05 - 266ms/epoch - 6ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9131 - val_loss: 0.0789 - lr: 1.0000e-05 - 283ms/epoch - 6ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9128 - val_loss: 0.0790 - lr: 1.0000e-05 - 282ms/epoch - 6ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9124 - val_loss: 0.0790 - lr: 1.0000e-05 - 250ms/epoch - 6ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9121 - val_loss: 0.0791 - lr: 1.0000e-05 - 256ms/epoch - 6ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9117 - val_loss: 0.0792 - lr: 1.0000e-05 - 250ms/epoch - 6ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9114 - val_loss: 0.0793 - lr: 1.0000e-05 - 265ms/epoch - 6ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9110 - val_loss: 0.0794 - lr: 1.0000e-05 - 295ms/epoch - 7ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9107 - val_loss: 0.0794 - lr: 1.0000e-05 - 251ms/epoch - 6ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9103 - val_loss: 0.0795 - lr: 1.0000e-05 - 248ms/epoch - 6ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9100 - val_loss: 0.0796 - lr: 1.0000e-05 - 284ms/epoch - 6ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9096 - val_loss: 0.0797 - lr: 1.0000e-05 - 267ms/epoch - 6ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9093 - val_loss: 0.0798 - lr: 1.0000e-05 - 224ms/epoch - 5ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9089 - val_loss: 0.0799 - lr: 1.0000e-05 - 252ms/epoch - 6ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.05416
45/45 - 0s - loss: 0.9086 - val_loss: 0.0800 - lr: 1.0000e-05 - 275ms/epoch - 6ms/step
Epoch 00053: early stopping
SMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	51.87% Accuracy
MSE:	 25.081351909450348 
RMSE:	 5.008128583557969 
MAPE:	 3.9377037384058786

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 37.86506779592453 
RMSE:	 6.153459823215273 
MAPE:	 4.830217084369749

WMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 53.37812690087662 
RMSE:	 7.306033595657539 
MAPE:	 5.95316326588041

DEMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 133.19678136227444 
RMSE:	 11.541090995320783 
MAPE:	 10.29859546777107

KAMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	50.37% Accuracy
MSE:	 20.693935177088164 
RMSE:	 4.549058713304123 
MAPE:	 3.6577262429810227

MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14
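MIDPOINT is the simplest of the overlays used here: the mean of the window's highest and lowest values. A NumPy sketch (warm-up handling may differ slightly from TA-Lib's):

```python
import numpy as np

def midpoint(price, timeperiod=14):
    """MidPoint over period: (highest + lowest) / 2 of the last
    `timeperiod` values."""
    price = np.asarray(price, dtype=float)
    out = np.full(len(price), np.nan)
    for t in range(timeperiod - 1, len(price)):
        window = price[t - timeperiod + 1:t + 1]
        out[t] = (window.max() + window.min()) / 2
    return out
```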

Working on MIDPOINT predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.36 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4212.289, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3747.746, Time=0.04 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.24 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3523.401, Time=0.08 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3387.759, Time=0.11 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.31 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.93 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3389.758, Time=0.22 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.314 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1689.879
Date:                Sun, 12 Dec 2021   AIC                           3387.759
Time:                        14:46:10   BIC                           3406.522
Sample:                             0   HQIC                          3394.964
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1878      0.003   -345.315      0.000      -1.195      -1.181
ar.L2         -0.8876      0.007   -121.809      0.000      -0.902      -0.873
ar.L3         -0.3957      0.007    -60.127      0.000      -0.409      -0.383
sigma2         3.8904      0.020    193.404      0.000       3.851       3.930
===================================================================================
Ljung-Box (L1) (Q):                  13.21   Jarque-Bera (JB):           1659080.01
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.08   Skew:                             3.28
Prob(H) (two-sided):                  0.00   Kurtosis:                       225.31
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05739, saving model to LSTM4.h5
58/58 - 4s - loss: 1.3554 - val_loss: 0.0574 - lr: 0.0010 - 4s/epoch - 73ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.05739 to 0.05277, saving model to LSTM4.h5
58/58 - 0s - loss: 1.1751 - val_loss: 0.0528 - lr: 0.0010 - 331ms/epoch - 6ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.05277
58/58 - 0s - loss: 1.0556 - val_loss: 0.0541 - lr: 0.0010 - 329ms/epoch - 6ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.9570 - val_loss: 0.0582 - lr: 0.0010 - 287ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.8763 - val_loss: 0.0633 - lr: 0.0010 - 291ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.8146 - val_loss: 0.0687 - lr: 0.0010 - 305ms/epoch - 5ms/step
Epoch 7/500

Epoch 00007: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00007: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7700 - val_loss: 0.0739 - lr: 0.0010 - 303ms/epoch - 5ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7483 - val_loss: 0.0744 - lr: 1.0000e-04 - 297ms/epoch - 5ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7453 - val_loss: 0.0750 - lr: 1.0000e-04 - 326ms/epoch - 6ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7422 - val_loss: 0.0756 - lr: 1.0000e-04 - 293ms/epoch - 5ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7392 - val_loss: 0.0762 - lr: 1.0000e-04 - 289ms/epoch - 5ms/step
Epoch 12/500

Epoch 00012: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00012: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7363 - val_loss: 0.0768 - lr: 1.0000e-04 - 314ms/epoch - 5ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7344 - val_loss: 0.0768 - lr: 1.0000e-05 - 300ms/epoch - 5ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7341 - val_loss: 0.0769 - lr: 1.0000e-05 - 312ms/epoch - 5ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7338 - val_loss: 0.0770 - lr: 1.0000e-05 - 331ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7335 - val_loss: 0.0770 - lr: 1.0000e-05 - 331ms/epoch - 6ms/step
Epoch 17/500

Epoch 00017: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00017: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7332 - val_loss: 0.0771 - lr: 1.0000e-05 - 310ms/epoch - 5ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7329 - val_loss: 0.0772 - lr: 1.0000e-05 - 300ms/epoch - 5ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7326 - val_loss: 0.0773 - lr: 1.0000e-05 - 306ms/epoch - 5ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7323 - val_loss: 0.0773 - lr: 1.0000e-05 - 298ms/epoch - 5ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7320 - val_loss: 0.0774 - lr: 1.0000e-05 - 364ms/epoch - 6ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7316 - val_loss: 0.0775 - lr: 1.0000e-05 - 358ms/epoch - 6ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7313 - val_loss: 0.0776 - lr: 1.0000e-05 - 286ms/epoch - 5ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7310 - val_loss: 0.0777 - lr: 1.0000e-05 - 295ms/epoch - 5ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7306 - val_loss: 0.0778 - lr: 1.0000e-05 - 324ms/epoch - 6ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7303 - val_loss: 0.0778 - lr: 1.0000e-05 - 291ms/epoch - 5ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7300 - val_loss: 0.0779 - lr: 1.0000e-05 - 310ms/epoch - 5ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7296 - val_loss: 0.0780 - lr: 1.0000e-05 - 346ms/epoch - 6ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7293 - val_loss: 0.0781 - lr: 1.0000e-05 - 331ms/epoch - 6ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7290 - val_loss: 0.0782 - lr: 1.0000e-05 - 315ms/epoch - 5ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7286 - val_loss: 0.0783 - lr: 1.0000e-05 - 335ms/epoch - 6ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7283 - val_loss: 0.0784 - lr: 1.0000e-05 - 293ms/epoch - 5ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7279 - val_loss: 0.0785 - lr: 1.0000e-05 - 311ms/epoch - 5ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7276 - val_loss: 0.0786 - lr: 1.0000e-05 - 326ms/epoch - 6ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7272 - val_loss: 0.0787 - lr: 1.0000e-05 - 316ms/epoch - 5ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7269 - val_loss: 0.0788 - lr: 1.0000e-05 - 310ms/epoch - 5ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7265 - val_loss: 0.0789 - lr: 1.0000e-05 - 342ms/epoch - 6ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7262 - val_loss: 0.0790 - lr: 1.0000e-05 - 331ms/epoch - 6ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7258 - val_loss: 0.0791 - lr: 1.0000e-05 - 294ms/epoch - 5ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7255 - val_loss: 0.0792 - lr: 1.0000e-05 - 304ms/epoch - 5ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7252 - val_loss: 0.0793 - lr: 1.0000e-05 - 315ms/epoch - 5ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7248 - val_loss: 0.0794 - lr: 1.0000e-05 - 285ms/epoch - 5ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7245 - val_loss: 0.0796 - lr: 1.0000e-05 - 301ms/epoch - 5ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7241 - val_loss: 0.0797 - lr: 1.0000e-05 - 302ms/epoch - 5ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7238 - val_loss: 0.0798 - lr: 1.0000e-05 - 305ms/epoch - 5ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7234 - val_loss: 0.0799 - lr: 1.0000e-05 - 314ms/epoch - 5ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7231 - val_loss: 0.0800 - lr: 1.0000e-05 - 333ms/epoch - 6ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7227 - val_loss: 0.0801 - lr: 1.0000e-05 - 325ms/epoch - 6ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7224 - val_loss: 0.0802 - lr: 1.0000e-05 - 318ms/epoch - 5ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7220 - val_loss: 0.0804 - lr: 1.0000e-05 - 304ms/epoch - 5ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7217 - val_loss: 0.0805 - lr: 1.0000e-05 - 301ms/epoch - 5ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.05277
58/58 - 0s - loss: 0.7213 - val_loss: 0.0806 - lr: 1.0000e-05 - 308ms/epoch - 5ms/step
Epoch 00052: early stopping
SMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	51.87% Accuracy
MSE:	 25.081351909450348 
RMSE:	 5.008128583557969 
MAPE:	 3.9377037384058786

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 37.86506779592453 
RMSE:	 6.153459823215273 
MAPE:	 4.830217084369749

WMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 53.37812690087662 
RMSE:	 7.306033595657539 
MAPE:	 5.95316326588041

DEMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 133.19678136227444 
RMSE:	 11.541090995320783 
MAPE:	 10.29859546777107

KAMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	50.37% Accuracy
MSE:	 20.693935177088164 
RMSE:	 4.549058713304123 
MAPE:	 3.6577262429810227

MIDPOINT
Prediction vs Close:		48.13% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 18.24421544500263 
RMSE:	 4.271324788049093 
MAPE:	 3.3887721441386436
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
19

Working on T3 predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.41 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4414.515, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3944.062, Time=0.04 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.36 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3715.173, Time=0.07 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3577.471, Time=0.09 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.47 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.58 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3579.471, Time=0.20 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.258 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1784.736
Date:                Sun, 12 Dec 2021   AIC                           3577.471
Time:                        14:47:45   BIC                           3596.235
Sample:                             0   HQIC                          3584.677
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.844      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.861      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.862      0.000      -0.410      -0.387
sigma2         4.9242      0.023    215.469      0.000       4.879       4.969
===================================================================================
Ljung-Box (L1) (Q):                  14.55   Jarque-Bera (JB):           2468024.38
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       274.15
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04850, saving model to LSTM4.h5
43/43 - 4s - loss: 1.4060 - val_loss: 0.0485 - lr: 0.0010 - 4s/epoch - 88ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.3489 - val_loss: 0.0493 - lr: 0.0010 - 247ms/epoch - 6ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.2894 - val_loss: 0.0512 - lr: 0.0010 - 209ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.2311 - val_loss: 0.0529 - lr: 0.0010 - 228ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.1726 - val_loss: 0.0550 - lr: 0.0010 - 242ms/epoch - 6ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.1181 - val_loss: 0.0574 - lr: 0.0010 - 234ms/epoch - 5ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0859 - val_loss: 0.0576 - lr: 1.0000e-04 - 219ms/epoch - 5ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0812 - val_loss: 0.0579 - lr: 1.0000e-04 - 226ms/epoch - 5ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0765 - val_loss: 0.0582 - lr: 1.0000e-04 - 239ms/epoch - 6ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0720 - val_loss: 0.0585 - lr: 1.0000e-04 - 263ms/epoch - 6ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0675 - val_loss: 0.0588 - lr: 1.0000e-04 - 248ms/epoch - 6ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0647 - val_loss: 0.0588 - lr: 1.0000e-05 - 246ms/epoch - 6ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0643 - val_loss: 0.0589 - lr: 1.0000e-05 - 227ms/epoch - 5ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0638 - val_loss: 0.0589 - lr: 1.0000e-05 - 240ms/epoch - 6ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0634 - val_loss: 0.0589 - lr: 1.0000e-05 - 250ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00016: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0630 - val_loss: 0.0590 - lr: 1.0000e-05 - 244ms/epoch - 6ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0625 - val_loss: 0.0590 - lr: 1.0000e-05 - 283ms/epoch - 7ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0621 - val_loss: 0.0590 - lr: 1.0000e-05 - 235ms/epoch - 5ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0616 - val_loss: 0.0591 - lr: 1.0000e-05 - 209ms/epoch - 5ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0612 - val_loss: 0.0591 - lr: 1.0000e-05 - 218ms/epoch - 5ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0608 - val_loss: 0.0591 - lr: 1.0000e-05 - 226ms/epoch - 5ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0603 - val_loss: 0.0592 - lr: 1.0000e-05 - 250ms/epoch - 6ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0599 - val_loss: 0.0592 - lr: 1.0000e-05 - 285ms/epoch - 7ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0595 - val_loss: 0.0592 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0590 - val_loss: 0.0593 - lr: 1.0000e-05 - 277ms/epoch - 6ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0586 - val_loss: 0.0593 - lr: 1.0000e-05 - 288ms/epoch - 7ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0582 - val_loss: 0.0593 - lr: 1.0000e-05 - 223ms/epoch - 5ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0577 - val_loss: 0.0594 - lr: 1.0000e-05 - 240ms/epoch - 6ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0573 - val_loss: 0.0594 - lr: 1.0000e-05 - 276ms/epoch - 6ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0568 - val_loss: 0.0595 - lr: 1.0000e-05 - 252ms/epoch - 6ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0564 - val_loss: 0.0595 - lr: 1.0000e-05 - 218ms/epoch - 5ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0560 - val_loss: 0.0595 - lr: 1.0000e-05 - 216ms/epoch - 5ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0555 - val_loss: 0.0596 - lr: 1.0000e-05 - 237ms/epoch - 6ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0551 - val_loss: 0.0596 - lr: 1.0000e-05 - 249ms/epoch - 6ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0547 - val_loss: 0.0596 - lr: 1.0000e-05 - 216ms/epoch - 5ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0542 - val_loss: 0.0597 - lr: 1.0000e-05 - 216ms/epoch - 5ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0538 - val_loss: 0.0597 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0534 - val_loss: 0.0598 - lr: 1.0000e-05 - 237ms/epoch - 6ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0529 - val_loss: 0.0598 - lr: 1.0000e-05 - 269ms/epoch - 6ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0525 - val_loss: 0.0598 - lr: 1.0000e-05 - 241ms/epoch - 6ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0521 - val_loss: 0.0599 - lr: 1.0000e-05 - 247ms/epoch - 6ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0516 - val_loss: 0.0599 - lr: 1.0000e-05 - 242ms/epoch - 6ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0512 - val_loss: 0.0599 - lr: 1.0000e-05 - 248ms/epoch - 6ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0508 - val_loss: 0.0600 - lr: 1.0000e-05 - 220ms/epoch - 5ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0503 - val_loss: 0.0600 - lr: 1.0000e-05 - 228ms/epoch - 5ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0499 - val_loss: 0.0601 - lr: 1.0000e-05 - 249ms/epoch - 6ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0495 - val_loss: 0.0601 - lr: 1.0000e-05 - 254ms/epoch - 6ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0490 - val_loss: 0.0601 - lr: 1.0000e-05 - 242ms/epoch - 6ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0486 - val_loss: 0.0602 - lr: 1.0000e-05 - 217ms/epoch - 5ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0482 - val_loss: 0.0602 - lr: 1.0000e-05 - 240ms/epoch - 6ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.04850
43/43 - 0s - loss: 1.0478 - val_loss: 0.0603 - lr: 1.0000e-05 - 222ms/epoch - 5ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	51.87% Accuracy
MSE:	 25.081351909450348 
RMSE:	 5.008128583557969 
MAPE:	 3.9377037384058786

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 37.86506779592453 
RMSE:	 6.153459823215273 
MAPE:	 4.830217084369749

WMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 53.37812690087662 
RMSE:	 7.306033595657539 
MAPE:	 5.95316326588041

DEMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 133.19678136227444 
RMSE:	 11.541090995320783 
MAPE:	 10.29859546777107

KAMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	50.37% Accuracy
MSE:	 20.693935177088164 
RMSE:	 4.549058713304123 
MAPE:	 3.6577262429810227

MIDPOINT
Prediction vs Close:		48.13% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 18.24421544500263 
RMSE:	 4.271324788049093 
MAPE:	 3.3887721441386436

T3
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 74.9216743993189 
RMSE:	 8.655730725901707 
MAPE:	 7.03412901576443
TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9

Working on TEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.45 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4352.703, Time=0.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3889.412, Time=0.05 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.28 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3689.930, Time=0.06 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3574.245, Time=0.08 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.23 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.80 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3576.245, Time=0.18 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.171 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1783.123
Date:                Sun, 12 Dec 2021   AIC                           3574.245
Time:                        14:49:16   BIC                           3593.008
Sample:                             0   HQIC                          3581.451
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1480      0.004   -302.430      0.000      -1.155      -1.141
ar.L2         -0.8300      0.008    -99.682      0.000      -0.846      -0.814
ar.L3         -0.3687      0.007    -50.527      0.000      -0.383      -0.354
sigma2         4.9055      0.028    175.970      0.000       4.851       4.960
===================================================================================
Ljung-Box (L1) (Q):                  11.61   Jarque-Bera (JB):           1261976.58
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.16   Skew:                             2.52
Prob(H) (two-sided):                  0.00   Kurtosis:                       196.90
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04545, saving model to LSTM4.h5
90/90 - 4s - loss: 1.3805 - val_loss: 0.0455 - lr: 0.0010 - 4s/epoch - 44ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.04545
90/90 - 0s - loss: 1.2203 - val_loss: 0.0467 - lr: 0.0010 - 476ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.04545
90/90 - 0s - loss: 1.0618 - val_loss: 0.0520 - lr: 0.0010 - 457ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.9221 - val_loss: 0.0615 - lr: 0.0010 - 446ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.8392 - val_loss: 0.0696 - lr: 0.0010 - 448ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7895 - val_loss: 0.0768 - lr: 0.0010 - 464ms/epoch - 5ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.04545
90/90 - 1s - loss: 0.7661 - val_loss: 0.0775 - lr: 1.0000e-04 - 532ms/epoch - 6ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7627 - val_loss: 0.0782 - lr: 1.0000e-04 - 447ms/epoch - 5ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.04545
90/90 - 1s - loss: 0.7592 - val_loss: 0.0790 - lr: 1.0000e-04 - 526ms/epoch - 6ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7558 - val_loss: 0.0798 - lr: 1.0000e-04 - 463ms/epoch - 5ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7524 - val_loss: 0.0807 - lr: 1.0000e-04 - 447ms/epoch - 5ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.04545
90/90 - 1s - loss: 0.7503 - val_loss: 0.0807 - lr: 1.0000e-05 - 530ms/epoch - 6ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7499 - val_loss: 0.0808 - lr: 1.0000e-05 - 421ms/epoch - 5ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.04545
90/90 - 1s - loss: 0.7496 - val_loss: 0.0809 - lr: 1.0000e-05 - 539ms/epoch - 6ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.04545
90/90 - 1s - loss: 0.7492 - val_loss: 0.0810 - lr: 1.0000e-05 - 513ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00016: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7488 - val_loss: 0.0811 - lr: 1.0000e-05 - 462ms/epoch - 5ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7485 - val_loss: 0.0812 - lr: 1.0000e-05 - 459ms/epoch - 5ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7481 - val_loss: 0.0813 - lr: 1.0000e-05 - 457ms/epoch - 5ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7477 - val_loss: 0.0815 - lr: 1.0000e-05 - 435ms/epoch - 5ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7473 - val_loss: 0.0816 - lr: 1.0000e-05 - 459ms/epoch - 5ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7470 - val_loss: 0.0817 - lr: 1.0000e-05 - 434ms/epoch - 5ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7466 - val_loss: 0.0818 - lr: 1.0000e-05 - 453ms/epoch - 5ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7462 - val_loss: 0.0819 - lr: 1.0000e-05 - 453ms/epoch - 5ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7458 - val_loss: 0.0821 - lr: 1.0000e-05 - 453ms/epoch - 5ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.04545
90/90 - 1s - loss: 0.7454 - val_loss: 0.0822 - lr: 1.0000e-05 - 557ms/epoch - 6ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.04545
90/90 - 1s - loss: 0.7450 - val_loss: 0.0823 - lr: 1.0000e-05 - 576ms/epoch - 6ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7446 - val_loss: 0.0825 - lr: 1.0000e-05 - 468ms/epoch - 5ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7442 - val_loss: 0.0826 - lr: 1.0000e-05 - 449ms/epoch - 5ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7438 - val_loss: 0.0827 - lr: 1.0000e-05 - 438ms/epoch - 5ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7434 - val_loss: 0.0829 - lr: 1.0000e-05 - 463ms/epoch - 5ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7430 - val_loss: 0.0830 - lr: 1.0000e-05 - 450ms/epoch - 5ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7426 - val_loss: 0.0832 - lr: 1.0000e-05 - 470ms/epoch - 5ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.04545
90/90 - 1s - loss: 0.7422 - val_loss: 0.0833 - lr: 1.0000e-05 - 541ms/epoch - 6ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7418 - val_loss: 0.0835 - lr: 1.0000e-05 - 433ms/epoch - 5ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7414 - val_loss: 0.0836 - lr: 1.0000e-05 - 480ms/epoch - 5ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7410 - val_loss: 0.0838 - lr: 1.0000e-05 - 433ms/epoch - 5ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7406 - val_loss: 0.0840 - lr: 1.0000e-05 - 452ms/epoch - 5ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.04545
90/90 - 1s - loss: 0.7402 - val_loss: 0.0841 - lr: 1.0000e-05 - 561ms/epoch - 6ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7397 - val_loss: 0.0843 - lr: 1.0000e-05 - 469ms/epoch - 5ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7393 - val_loss: 0.0845 - lr: 1.0000e-05 - 440ms/epoch - 5ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.04545
90/90 - 1s - loss: 0.7389 - val_loss: 0.0846 - lr: 1.0000e-05 - 556ms/epoch - 6ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7385 - val_loss: 0.0848 - lr: 1.0000e-05 - 437ms/epoch - 5ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.04545
90/90 - 1s - loss: 0.7381 - val_loss: 0.0850 - lr: 1.0000e-05 - 545ms/epoch - 6ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7377 - val_loss: 0.0852 - lr: 1.0000e-05 - 433ms/epoch - 5ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7373 - val_loss: 0.0853 - lr: 1.0000e-05 - 464ms/epoch - 5ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7369 - val_loss: 0.0855 - lr: 1.0000e-05 - 431ms/epoch - 5ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.04545
90/90 - 1s - loss: 0.7365 - val_loss: 0.0857 - lr: 1.0000e-05 - 520ms/epoch - 6ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7361 - val_loss: 0.0859 - lr: 1.0000e-05 - 447ms/epoch - 5ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7357 - val_loss: 0.0861 - lr: 1.0000e-05 - 446ms/epoch - 5ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.04545
90/90 - 0s - loss: 0.7353 - val_loss: 0.0863 - lr: 1.0000e-05 - 470ms/epoch - 5ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.04545
90/90 - 1s - loss: 0.7349 - val_loss: 0.0865 - lr: 1.0000e-05 - 510ms/epoch - 6ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	51.87% Accuracy
MSE:	 25.081351909450348 
RMSE:	 5.008128583557969 
MAPE:	 3.9377037384058786

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 37.86506779592453 
RMSE:	 6.153459823215273 
MAPE:	 4.830217084369749

WMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 53.37812690087662 
RMSE:	 7.306033595657539 
MAPE:	 5.95316326588041

DEMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 133.19678136227444 
RMSE:	 11.541090995320783 
MAPE:	 10.29859546777107

KAMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	50.37% Accuracy
MSE:	 20.693935177088164 
RMSE:	 4.549058713304123 
MAPE:	 3.6577262429810227

MIDPOINT
Prediction vs Close:		48.13% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 18.24421544500263 
RMSE:	 4.271324788049093 
MAPE:	 3.3887721441386436

T3
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 74.9216743993189 
RMSE:	 8.655730725901707 
MAPE:	 7.03412901576443

TEMA
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	48.88% Accuracy
MSE:	 33.63623508699449 
RMSE:	 5.799675429452453 
MAPE:	 5.291282498785388
Runtime: mins: 11.956794810899995
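
The directional-accuracy, MSE, RMSE, and MAPE figures reported above can be reproduced from any prediction series with a small pure-Python helper. This is a sketch, not the notebook's own code: the function name `evaluate` and the exact accuracy definition (sign of the predicted move vs. the actual move) are assumptions.

```python
import math

def evaluate(actual, predicted):
    """Return (directional accuracy %, MSE, RMSE, MAPE) for two equal-length series.

    NOTE: illustrative sketch only; the notebook's accuracy definition may differ.
    """
    assert len(actual) == len(predicted) and len(actual) > 1
    # Directional accuracy: fraction of steps where the predicted move
    # has the same sign as the actual move.
    hits = sum(
        1 for i in range(1, len(actual))
        if (predicted[i] - actual[i - 1]) * (actual[i] - actual[i - 1]) > 0
    )
    accuracy = 100.0 * hits / (len(actual) - 1)
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    rmse = math.sqrt(mse)
    # MAPE assumes no zero values in `actual` (true for price series)
    mape = 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)
    return accuracy, mse, rmse, mape
```

Note that RMSE is simply the square root of MSE, which is why the two columns in the printouts above always agree.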

Architecture Used

In [108]:
from google.colab import files
import cv2
uploaded = files.upload()
In [109]:
img = cv2.imread('Experiment4.png')
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture '+imgfile,fontsize=18)
plt.imshow(img)
Out[109]:
<matplotlib.image.AxesImage at 0x7f7663294ad0>

Model Plots

In [103]:
with open('simulation4_data.json') as json_file:
    simulation4 = json.load(json_file)
fileimg = 'Experiment4'
In [104]:
for i in range(len(list(simulation4.keys()))):
  SIM = list(simulation4.keys())[i]
  plot_train(simulation4,SIM)
  plot_test(simulation4,SIM)
----- Train RMSE for SMA ----- 2.1639669204351253
----- Train_MSE_LSTM for SMA ----- 4.68275283273748
----- Train MAE LSTM for SMA ----- 2.1075676549779305
----- Test RMSE for SMA----- 5.008128583557969
----- Test_MSE_LSTM for SMA----- 25.081351909450348
----- Test_MAE_LSTM for SMA----- 3.9377037384058786
----- Train RMSE for EMA ----- 5.204058044811815
----- Train_MSE_LSTM for EMA ----- 27.082220133770573
----- Train MAE LSTM for EMA ----- 5.18856418958985
----- Test RMSE for EMA----- 6.153459823215273
----- Test_MSE_LSTM for EMA----- 37.86506779592453
----- Test_MAE_LSTM for EMA----- 4.830217084369749
----- Train RMSE for WMA ----- 2.3492817584355308
----- Train_MSE_LSTM for WMA ----- 5.519124780517939
----- Train MAE LSTM for WMA ----- 2.050265444387304
----- Test RMSE for WMA----- 7.306033595657539
----- Test_MSE_LSTM for WMA----- 53.37812690087662
----- Test_MAE_LSTM for WMA----- 5.95316326588041
----- Train RMSE for DEMA ----- 5.563977433104181
----- Train_MSE_LSTM for DEMA ----- 30.957844876092594
----- Train MAE LSTM for DEMA ----- 5.534693951653962
----- Test RMSE for DEMA----- 11.541090995320783
----- Test_MSE_LSTM for DEMA----- 133.19678136227444
----- Test_MAE_LSTM for DEMA----- 10.29859546777107
----- Train RMSE for KAMA ----- 1.9630444354122671
----- Train_MSE_LSTM for KAMA ----- 3.8535434554030665
----- Train MAE LSTM for KAMA ----- 1.943726107625678
----- Test RMSE for KAMA----- 4.549058713304123
----- Test_MSE_LSTM for KAMA----- 20.693935177088164
----- Test_MAE_LSTM for KAMA----- 3.6577262429810227
----- Train RMSE for MIDPOINT ----- 4.428587416955797
----- Train_MSE_LSTM for MIDPOINT ----- 19.612386509619217
----- Train MAE LSTM for MIDPOINT ----- 4.4129323463628785
----- Test RMSE for MIDPOINT----- 4.271324788049093
----- Test_MSE_LSTM for MIDPOINT----- 18.24421544500263
----- Test_MAE_LSTM for MIDPOINT----- 3.3887721441386436
----- Train RMSE for T3 ----- 3.3089997289158237
----- Train_MSE_LSTM for T3 ----- 10.949479205964995
----- Train MAE LSTM for T3 ----- 3.209559452415693
----- Test RMSE for T3----- 8.655730725901707
----- Test_MSE_LSTM for T3----- 74.9216743993189
----- Test_MAE_LSTM for T3----- 7.03412901576443
----- Train RMSE for TEMA ----- 1.0554541673050564
----- Train_MSE_LSTM for TEMA ----- 1.1139834992816098
----- Train MAE LSTM for TEMA ----- 0.602800428277195
----- Test RMSE for TEMA----- 5.799675429452453
----- Test_MSE_LSTM for TEMA----- 33.63623508699449
----- Test_MAE_LSTM for TEMA----- 5.291282498785388
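
The gap between train and test RMSE in the printout above is a quick overfitting signal: TEMA trains to RMSE 1.06 but tests at 5.80, while MIDPOINT generalizes almost perfectly. A small helper to rank the indicators by that gap (a sketch; the `overfit_ratio` helper is an assumption, and the values are rounded from the printout above):

```python
def overfit_ratio(train_rmse, test_rmse):
    """Test/train RMSE ratio; values well above 1 suggest overfitting."""
    return test_rmse / train_rmse

# (train RMSE, test RMSE) rounded from the plot_train/plot_test output above
results = {
    "SMA": (2.1640, 5.0081), "EMA": (5.2041, 6.1535),
    "WMA": (2.3493, 7.3060), "DEMA": (5.5640, 11.5411),
    "KAMA": (1.9630, 4.5491), "MIDPOINT": (4.4286, 4.2713),
    "T3": (3.3090, 8.6557), "TEMA": (1.0555, 5.7997),
}
ranked = sorted(results, key=lambda k: overfit_ratio(*results[k]), reverse=True)
# TEMA shows the largest test/train gap; MIDPOINT the smallest.
```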

ARIMA with Exogenous Variable Multistep Multivariate LSTM Hybrid Model Experiment 5

In [114]:
def get_arima_exog(dataframe, original_data, train_len, test_len):
    # Prepare train and test data for the exogenous variables
    # (use the dataframe/original_data parameters rather than globals)
    X_value = pd.DataFrame(dataframe.iloc[:, :])
    y_value = pd.DataFrame(dataframe.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    X_train, X_test = split_train_test(X_scale_dataset)
    y_train, y_test = split_train_test(y_scale_dataset)
    yc_train, yc_test = split_train_test(original_data)
    yc = yc_test.values.tolist()
    y_train_list = y_train.flatten().tolist()
    y_test_list = y_test.flatten().tolist()
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)

    # Initialize model and determine its order via stepwise search
    model = auto_arima(y_train_list, exogenous=X_train, trace=True, error_action='ignore',
                       start_p=1, start_q=1, max_p=3, max_q=3, d=3,
                       suppress_warnings=True, stepwise=True, seasonal=True)
    print(model.summary())
    model.fit(y_train_list, maxiter=200)
    order = model.get_params()['order']
    print('ARIMA order:', order, '\n')

    # Generate one-step-ahead predictions with an expanding window:
    # refit on the history, forecast one step, then append the true value
    prediction = []
    for i in range(len(y_test_list)):
        model = pmdarima.ARIMA(order=order)
        model.fit(y_train_list)
        prediction.append(model.predict()[0])
        y_train_list.append(y_test_list[i])

    prediction_inv = y_scaler.inverse_transform(np.array(prediction).reshape(-1, 1))
    y_test_ = y_scaler.inverse_transform(np.array(y_test_list).reshape(-1, 1))

    # Generate error metrics on the inverse-transformed scale
    mse = mean_squared_error(y_test_, prediction_inv)
    rmse = mse ** 0.5
    mae = mean_absolute_error(y_test_, prediction_inv)
    return yc, prediction_inv.flatten().tolist(), mse, rmse, mae
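
The prediction loop in `get_arima_exog` follows an expanding-window, one-step-ahead pattern: forecast the next point, then append the true value to the history before refitting. The pattern can be isolated as below — a sketch that substitutes a trivial last-value "model" for `pmdarima.ARIMA` so it stays dependency-free; the `walk_forward` name and `fit_predict` callback are assumptions, not the notebook's API.

```python
def walk_forward(train, test, fit_predict):
    """Expanding-window one-step-ahead forecasting.

    fit_predict(history) returns the next-step forecast; the notebook
    would fit pmdarima.ARIMA(order=order) on `history` here instead.
    """
    history = list(train)
    preds = []
    for obs in test:
        preds.append(fit_predict(history))  # forecast one step ahead
        history.append(obs)                 # then reveal the true value
    return preds

# Trivial "model": predict the last observed value.
preds = walk_forward([1.0, 2.0, 3.0], [4.0, 5.0], lambda h: h[-1])
# preds == [3.0, 4.0]
```

Refitting the ARIMA from scratch at every step is what dominates the runtime of these experiments; the expanding window is what prevents look-ahead leakage.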
In [115]:
def get_lstm(data, original_data, train_len, test_len, img_file, ma, lstm_len=3):
    # Prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Get data and check shape: X has shape (samples, n_steps_in, features),
    # where each slice is one lookback window; yc holds the corresponding closes
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)
    X_train, X_test = split_train_test(X)
    y_train, y_test = split_train_test(y)
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20
    input_dim = X_train.shape[1]     # lookback steps, e.g. 3
    feature_size = X_train.shape[2]  # features per step, e.g. 24
    output_dim = y_train.shape[1]    # forecast steps, e.g. 1

    # Option 1
    # Set up & fit LSTM RNN
    model = Sequential()
    model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    model.add(Dense(units=64,activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(units=output_dim))
    model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')

    ## Common code
    callbacks = [
    EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    ModelCheckpoint('LSTM5.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file+'.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # plot loss
    fname2 = img_file+'-'+ma
    plt.title(img_file+'-'+ma+' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    plt.plot(history.history['loss'], label='train')
    plt.plot(history.history['val_loss'], label='validation')
    plt.legend()
    plt.savefig(fname2+'.png', dpi='figure')
    plt.show()


    # # option 2
    # model = Sequential()
    # model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
    # model.add(Dense(64))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate=0.001), loss='mean_squared_error', metrics=['accuracy'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 3
    # define custom activation
    # 
    # class Double_Tanh(Activation):
    #     def __init__(self, activation, **kwargs):
    #         super(Double_Tanh, self).__init__(activation, **kwargs)
    #         self.__name__ = 'double_tanh'

    # def double_tanh(x):
    #     return (K.tanh(x) * 2)

    # get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
    #     # Model Generation
    # model = Sequential()
    # #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    # model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
    # model.add(Dense(1))
    # model.add(Activation(double_tanh))
    # model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 4
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(input_dim, feature_size)))
    # model.add(LSTM(units=int(lstm_len/2)))
    # model.add(Dense(1, activation='sigmoid'))
    # model.compile(loss='mean_squared_error', optimizer='adam')
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM5.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()



    # Generate predictions
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).flatten().tolist()  # back to price units, flattened to one list
    # Generate error data
    ## replace with yc, X_test generated by the new multistep method
    # Note: y_train is still in scaled units while predictiontr has been
    # inverse-transformed, so these train-set errors mix scales.
    mse_tr = mean_squared_error(y_train, predictiontr)
    rmse_tr = mse_tr ** 0.5
    # mape_tr = mean_absolute_percentage_error(y_train, pd.Series(predictiontr))
    mae_tr = mean_absolute_error(y_train, pd.Series(predictiontr))
    # Original_tr = pd.Series(yc_train)
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()


    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte) - det).flatten().tolist()  # det offset applied to test predictions only
    # Generate error data (same caveat: y_test is scaled, predictionte is not)
    mse_te = mean_squared_error(y_test, predictionte)
    rmse_te = mse_te ** 0.5
    # mape_te = mean_absolute_percentage_error(y_test, pd.Series(predictionte))
    mae_te = mean_absolute_error(y_test, pd.Series(predictionte))
    # Original_te = pd.Series(yc_test)
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()

    return Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr, Original_te, predictionte, mse_te, rmse_te, mae_te
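The function above fits `y_scaler` on the raw closing prices and inverse-transforms the network's outputs before reporting them, since the LSTM is trained entirely in scaled space. A minimal sketch of that round trip, using hypothetical prices rather than the notebook's data:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Toy closing-price column (made-up values)
prices = np.array([[100.0], [105.0], [95.0], [110.0]])

scaler = MinMaxScaler(feature_range=(-1, 1))
scaled = scaler.fit_transform(prices)  # the network trains on values in [-1, 1]

# A model prediction lives in scaled space; it must be mapped back
# before computing errors in price units.
restored = scaler.inverse_transform(scaled)

assert np.allclose(restored, prices)
```

The same scaler object must be reused for the inverse transform; fitting a fresh scaler on the predictions would silently change the mapping.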
In [116]:
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation5 = {}
    imgfile = 'Experiment5'
    for ma in optimized_period:
                print(ma)
                print(functions[ma])
                print(int(optimized_period[ma]))
              # if ma == 'SMA':
                low_vol = df.apply(lambda c: functions[ma](c, timeperiod=int(optimized_period[ma])))
                low_vol = low_vol.fillna(0)
                low_vol_data = df['close']
                high_vol = pd.DataFrame()
                df2 = df.copy()
                for i in df2.columns:
                  if i in low_vol.columns:
                    high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
                high_vol_data = df['close']
                ## *****************************************************
                # Generate ARIMA and LSTM predictions
                print('\nWorking on ' + ma + ' predictions')
                try:
                  print('parameters used : ', train_len, test_len)
                  low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse,low_vol_mae = get_arima_exog(low_vol,low_vol_data, train_len, test_len)
                except Exception:
                    print('ARIMA error, skipping to next MA type')
                    continue
                Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr, high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse, high_vol_mae = get_lstm(high_vol, high_vol_data, train_len, test_len, imgfile, ma)
                final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps 
                mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
                rmse_ftr = mse_ftr ** 0.5
                mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
                mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)

                final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
                mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
                rmse = mse ** 0.5
                mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
                mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
                # Generate prediction accuracy
                actual = df['close'].tail(test_len).values
                result_1 = []
                result_2 = []
                for i in range(1, len(final_prediction)):
                    # Compare prediction to previous close price
                    if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                        result_1.append(1)
                    elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                        result_1.append(1)
                    else:
                        result_1.append(0)

                    # Compare prediction to previous prediction
                    if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                        result_2.append(1)
                    elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                        result_2.append(1)
                    else:
                        result_2.append(0)

                accuracy_1 = np.mean(result_1)
                accuracy_2 = np.mean(result_2)

                simulation5[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
                                              'rmse': low_vol_rmse, 'mae' : low_vol_mae},
                                  'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
                                              'rmse': high_vol_rmse, 'mae' : high_vol_mae},
                                  'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
                                              'rmse': rmse_ftr, 'mae' : mae_ftr},
                                  'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
                                            'rmse': rmse, 'mae': mae },
                                  'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}

                # save simulation data here as checkpoint
                with open('simulation5_data.json', 'w') as fp:
                    json.dump(simulation5, fp)

                for key in simulation5.keys():  # 'key' avoids shadowing the outer loop variable 'ma'
                    print('\n' + key)
                    print('Prediction vs Close:\t\t' + str(round(100*simulation5[key]['accuracy']['prediction vs close'], 2))
                          + '% Accuracy')
                    print('Prediction vs Prediction:\t' + str(round(100*simulation5[key]['accuracy']['prediction vs prediction'], 2))
                          + '% Accuracy')
                    print('MSE:\t', simulation5[key]['final']['mse'],
                          '\nRMSE:\t', simulation5[key]['final']['rmse'],
                          '\nMAE:\t', simulation5[key]['final']['mae'])
                          # '\nMAPE:\t', simulation5[key]['final']['mape'])
              # else:
              #   break
    elapsed = timeit.default_timer() - start_time
    print('Runtime (mins):', elapsed/60)
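The two directional-accuracy scores computed in the loop above can be factored into one helper. This is a sketch of the same convention the notebook uses (compare each prediction either to the previous close or to the previous prediction, with ties counted as misses); the function name and sample values are our own:

```python
import numpy as np

def directional_accuracy(pred, actual, against='close'):
    """Fraction of steps where the predicted move matches the actual move.

    against='close'      compares pred[i] with actual[i-1] (prediction vs close)
    against='prediction' compares pred[i] with pred[i-1]   (prediction vs prediction)
    """
    pred, actual = np.asarray(pred), np.asarray(actual)
    base = actual[:-1] if against == 'close' else pred[:-1]
    pred_up = pred[1:] > base
    pred_down = pred[1:] < base
    act_up = actual[1:] > actual[:-1]
    act_down = actual[1:] < actual[:-1]
    # A hit is an up-move predicted on an up-move, or down on down; ties miss
    hits = (pred_up & act_up) | (pred_down & act_down)
    return hits.mean()

pred = [10.0, 11.0, 10.5, 12.0]      # made-up predictions
actual = [10.0, 10.8, 11.0, 10.5]    # made-up closes
acc_close = directional_accuracy(pred, actual, against='close')
acc_pred = directional_accuracy(pred, actual, against='prediction')
```

Vectorising the comparisons this way gives the same result as the explicit `result_1`/`result_2` loops while making the tie-handling policy easy to audit.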
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17

Working on SMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16999.786, Time=3.27 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14568.592, Time=4.65 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15578.394, Time=8.47 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14566.592, Time=6.72 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16966.361, Time=9.33 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-16121.635, Time=9.70 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-17214.069, Time=13.01 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-14562.592, Time=9.54 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-14572.319, Time=9.31 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-14403.474, Time=41.51 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 115.521 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8634.035
Date:                Sun, 12 Dec 2021   AIC                         -17214.069
Time:                        15:07:42   BIC                         -17087.416
Sample:                             0   HQIC                        -17165.429
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -4.257e-09   9.55e-06     -0.000      1.000   -1.87e-05    1.87e-05
x2         -4.256e-09   9.56e-06     -0.000      1.000   -1.87e-05    1.87e-05
x3         -4.313e-09   9.62e-06     -0.000      1.000   -1.89e-05    1.88e-05
x4             1.0000   9.61e-06   1.04e+05      0.000       1.000       1.000
x5         -3.891e-09   9.14e-06     -0.000      1.000   -1.79e-05    1.79e-05
x6         -1.122e-08   1.03e-05     -0.001      0.999   -2.03e-05    2.03e-05
x7         -4.223e-09   9.54e-06     -0.000      1.000   -1.87e-05    1.87e-05
x8         -4.234e-09   9.55e-06     -0.000      1.000   -1.87e-05    1.87e-05
x9         -1.626e-10   6.54e-07     -0.000      1.000   -1.28e-06    1.28e-06
x10        -6.831e-10   2.91e-06     -0.000      1.000    -5.7e-06     5.7e-06
x11        -4.115e-09   9.41e-06     -0.000      1.000   -1.84e-05    1.84e-05
x12        -4.303e-09   9.62e-06     -0.000      1.000   -1.89e-05    1.88e-05
x13        -4.288e-09    9.6e-06     -0.000      1.000   -1.88e-05    1.88e-05
x14        -3.749e-08   2.81e-05     -0.001      0.999   -5.51e-05     5.5e-05
x15        -5.032e-09   1.04e-05     -0.000      1.000   -2.04e-05    2.03e-05
x16        -3.685e-09      9e-06     -0.000      1.000   -1.76e-05    1.76e-05
x17        -3.286e-09   8.45e-06     -0.000      1.000   -1.66e-05    1.66e-05
x18         -1.22e-08   1.59e-05     -0.001      0.999   -3.11e-05    3.11e-05
x19        -5.685e-09    1.1e-05     -0.001      1.000   -2.16e-05    2.16e-05
x20         -1.42e-08   1.69e-05     -0.001      0.999   -3.32e-05    3.32e-05
x21        -5.194e-08   3.31e-05     -0.002      0.999   -6.49e-05    6.48e-05
x22        -2.548e-08   2.31e-05     -0.001      0.999   -4.53e-05    4.52e-05
x23        -3.534e-08   2.73e-05     -0.001      0.999   -5.35e-05    5.34e-05
x24        -1.566e-08    1.8e-05     -0.001      0.999   -3.53e-05    3.53e-05
ma.L1         -1.3899   4.98e-09  -2.79e+08      0.000      -1.390      -1.390
ma.L2          0.4032   4.98e-09   8.09e+07      0.000       0.403       0.403
sigma2      7.635e-11   6.92e-11      1.103      0.270   -5.93e-11    2.12e-10
===================================================================================
Ljung-Box (L1) (Q):                  68.48   Jarque-Bera (JB):           5579791.06
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            10.12
Prob(H) (two-sided):                  0.00   Kurtosis:                       410.36
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.69e+24. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 

WARNING:tensorflow:Layer lstm_49 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.56547, saving model to LSTM5.h5
48/48 - 2s - loss: 0.2090 - val_loss: 0.5655 - lr: 0.0010 - 2s/epoch - 43ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.56547 to 0.03066, saving model to LSTM5.h5
48/48 - 0s - loss: 0.1069 - val_loss: 0.0307 - lr: 0.0010 - 405ms/epoch - 8ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.03066
48/48 - 0s - loss: 0.2029 - val_loss: 0.6358 - lr: 0.0010 - 454ms/epoch - 9ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.03066 to 0.02836, saving model to LSTM5.h5
48/48 - 0s - loss: 0.0706 - val_loss: 0.0284 - lr: 0.0010 - 400ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.02836
48/48 - 0s - loss: 0.0601 - val_loss: 0.1206 - lr: 0.0010 - 448ms/epoch - 9ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.02836 to 0.01619, saving model to LSTM5.h5
48/48 - 0s - loss: 0.0450 - val_loss: 0.0162 - lr: 0.0010 - 449ms/epoch - 9ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0492 - val_loss: 0.1798 - lr: 0.0010 - 434ms/epoch - 9ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0346 - val_loss: 0.0218 - lr: 0.0010 - 432ms/epoch - 9ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0408 - val_loss: 0.2343 - lr: 0.0010 - 418ms/epoch - 9ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0355 - val_loss: 0.0463 - lr: 0.0010 - 464ms/epoch - 10ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00011: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0305 - val_loss: 0.1022 - lr: 0.0010 - 396ms/epoch - 8ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0334 - val_loss: 0.0843 - lr: 1.0000e-04 - 391ms/epoch - 8ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0300 - val_loss: 0.0847 - lr: 1.0000e-04 - 431ms/epoch - 9ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0297 - val_loss: 0.0868 - lr: 1.0000e-04 - 378ms/epoch - 8ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0281 - val_loss: 0.0946 - lr: 1.0000e-04 - 454ms/epoch - 9ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00016: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0260 - val_loss: 0.0970 - lr: 1.0000e-04 - 467ms/epoch - 10ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0265 - val_loss: 0.0971 - lr: 1.0000e-05 - 410ms/epoch - 9ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0264 - val_loss: 0.0966 - lr: 1.0000e-05 - 426ms/epoch - 9ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0267 - val_loss: 0.0955 - lr: 1.0000e-05 - 427ms/epoch - 9ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0287 - val_loss: 0.0951 - lr: 1.0000e-05 - 424ms/epoch - 9ms/step
Epoch 21/500

Epoch 00021: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00021: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0279 - val_loss: 0.0953 - lr: 1.0000e-05 - 411ms/epoch - 9ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0270 - val_loss: 0.0948 - lr: 1.0000e-05 - 418ms/epoch - 9ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0254 - val_loss: 0.0947 - lr: 1.0000e-05 - 460ms/epoch - 10ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0258 - val_loss: 0.0947 - lr: 1.0000e-05 - 385ms/epoch - 8ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0253 - val_loss: 0.0947 - lr: 1.0000e-05 - 408ms/epoch - 9ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0270 - val_loss: 0.0947 - lr: 1.0000e-05 - 423ms/epoch - 9ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0281 - val_loss: 0.0948 - lr: 1.0000e-05 - 393ms/epoch - 8ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0256 - val_loss: 0.0942 - lr: 1.0000e-05 - 371ms/epoch - 8ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0281 - val_loss: 0.0945 - lr: 1.0000e-05 - 438ms/epoch - 9ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0253 - val_loss: 0.0948 - lr: 1.0000e-05 - 437ms/epoch - 9ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0268 - val_loss: 0.0939 - lr: 1.0000e-05 - 407ms/epoch - 8ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0269 - val_loss: 0.0939 - lr: 1.0000e-05 - 423ms/epoch - 9ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0308 - val_loss: 0.0934 - lr: 1.0000e-05 - 420ms/epoch - 9ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0248 - val_loss: 0.0917 - lr: 1.0000e-05 - 399ms/epoch - 8ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0268 - val_loss: 0.0921 - lr: 1.0000e-05 - 411ms/epoch - 9ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0255 - val_loss: 0.0919 - lr: 1.0000e-05 - 380ms/epoch - 8ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0273 - val_loss: 0.0924 - lr: 1.0000e-05 - 421ms/epoch - 9ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0256 - val_loss: 0.0929 - lr: 1.0000e-05 - 395ms/epoch - 8ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0292 - val_loss: 0.0931 - lr: 1.0000e-05 - 407ms/epoch - 8ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0269 - val_loss: 0.0930 - lr: 1.0000e-05 - 368ms/epoch - 8ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0262 - val_loss: 0.0935 - lr: 1.0000e-05 - 438ms/epoch - 9ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0260 - val_loss: 0.0936 - lr: 1.0000e-05 - 486ms/epoch - 10ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0271 - val_loss: 0.0910 - lr: 1.0000e-05 - 414ms/epoch - 9ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0252 - val_loss: 0.0918 - lr: 1.0000e-05 - 384ms/epoch - 8ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0250 - val_loss: 0.0919 - lr: 1.0000e-05 - 398ms/epoch - 8ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0259 - val_loss: 0.0921 - lr: 1.0000e-05 - 420ms/epoch - 9ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0236 - val_loss: 0.0919 - lr: 1.0000e-05 - 408ms/epoch - 9ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0249 - val_loss: 0.0914 - lr: 1.0000e-05 - 399ms/epoch - 8ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0268 - val_loss: 0.0923 - lr: 1.0000e-05 - 421ms/epoch - 9ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0233 - val_loss: 0.0933 - lr: 1.0000e-05 - 393ms/epoch - 8ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0261 - val_loss: 0.0927 - lr: 1.0000e-05 - 393ms/epoch - 8ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0271 - val_loss: 0.0922 - lr: 1.0000e-05 - 402ms/epoch - 8ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0252 - val_loss: 0.0903 - lr: 1.0000e-05 - 402ms/epoch - 8ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0269 - val_loss: 0.0911 - lr: 1.0000e-05 - 403ms/epoch - 8ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0258 - val_loss: 0.0920 - lr: 1.0000e-05 - 377ms/epoch - 8ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.01619
48/48 - 0s - loss: 0.0264 - val_loss: 0.0917 - lr: 1.0000e-05 - 421ms/epoch - 9ms/step
Epoch 00056: early stopping
SMA
Prediction vs Close:		46.27% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 22.905457367129987 
RMSE:	 4.785964622427749 
MAE:	 4.003866138329542
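The final predictions scored above are formed by adding the ARIMA forecast of the smoothed (low-volatility) series to the LSTM forecast of the residual (high-volatility) series. A toy sketch of that decompose-and-recombine step, using made-up prices and a hypothetical 2-period SMA rather than the tuned periods:

```python
import numpy as np

close = np.array([100.0, 102.0, 101.0, 104.0, 103.0])  # made-up closes

# Low-volatility component: a short simple moving average (window of 2 here)
window = 2
low_vol = np.convolve(close, np.ones(window) / window, mode='valid')
# High-volatility component: residual of the close around its moving average
high_vol = close[window - 1:] - low_vol

# The hybrid forecasts each component separately, then adds them back together
reconstructed = low_vol + high_vol
assert np.allclose(reconstructed, close[window - 1:])
```

The decomposition is exact by construction, so any error in the hybrid forecast comes from the two component models, not from the recombination itself.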
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51

Working on EMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16999.778, Time=3.17 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14568.589, Time=4.74 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-14606.447, Time=6.20 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14566.589, Time=6.93 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15343.613, Time=10.18 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15047.583, Time=13.32 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16858.964, Time=11.76 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17024.022, Time=5.75 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-16998.618, Time=3.58 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17081.451, Time=6.55 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=inf, Time=16.66 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-16997.990, Time=3.47 sec
 ARIMA(3,3,1)(0,0,0)[0] intercept   : AIC=-16992.667, Time=4.34 sec

Best model:  ARIMA(3,3,1)(0,0,0)[0]          
Total fit time: 96.671 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 1)   Log Likelihood                8569.726
Date:                Sun, 12 Dec 2021   AIC                         -17081.451
Time:                        15:13:34   BIC                         -16945.417
Sample:                             0   HQIC                        -17029.208
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -2.316e-10   9.89e-05  -2.34e-06      1.000      -0.000       0.000
x2         -2.309e-10   9.88e-05  -2.34e-06      1.000      -0.000       0.000
x3         -2.325e-10   9.91e-05  -2.35e-06      1.000      -0.000       0.000
x4             1.0000    9.9e-05   1.01e+04      0.000       1.000       1.000
x5         -2.108e-10   9.43e-05  -2.24e-06      1.000      -0.000       0.000
x6         -7.997e-10      0.000  -4.63e-06      1.000      -0.000       0.000
x7         -2.295e-10   9.85e-05  -2.33e-06      1.000      -0.000       0.000
x8         -2.244e-10   9.74e-05   -2.3e-06      1.000      -0.000       0.000
x9         -1.166e-11   1.98e-05   -5.9e-07      1.000   -3.87e-05    3.87e-05
x10        -4.454e-11   4.19e-05  -1.06e-06      1.000   -8.22e-05    8.22e-05
x11        -2.219e-10   9.68e-05  -2.29e-06      1.000      -0.000       0.000
x12        -2.264e-10    9.8e-05  -2.31e-06      1.000      -0.000       0.000
x13        -2.315e-10   9.89e-05  -2.34e-06      1.000      -0.000       0.000
x14        -1.767e-09      0.000  -6.47e-06      1.000      -0.001       0.001
x15        -2.096e-10   9.38e-05  -2.23e-06      1.000      -0.000       0.000
x16        -5.257e-10      0.000   -3.5e-06      1.000      -0.000       0.000
x17        -2.143e-10   9.53e-05  -2.25e-06      1.000      -0.000       0.000
x18        -3.776e-11   3.61e-05  -1.05e-06      1.000   -7.08e-05    7.08e-05
x19         -2.52e-10      0.000  -2.41e-06      1.000      -0.000       0.000
x20        -2.417e-10   9.51e-05  -2.54e-06      1.000      -0.000       0.000
x21         -3.16e-09      0.000  -8.64e-06      1.000      -0.001       0.001
x22        -2.955e-09      0.000  -8.32e-06      1.000      -0.001       0.001
x23        -1.664e-09      0.000  -6.29e-06      1.000      -0.001       0.001
x24        -1.568e-09      0.000  -6.07e-06      1.000      -0.001       0.001
ar.L1         -0.4923    1.2e-09  -4.09e+08      0.000      -0.492      -0.492
ar.L2         -0.1923      7e-10  -2.75e+08      0.000      -0.192      -0.192
ar.L3         -0.0461   3.32e-10  -1.39e+08      0.000      -0.046      -0.046
ma.L1         -0.7077   2.73e-09  -2.59e+08      0.000      -0.708      -0.708
sigma2       8.99e-11   6.96e-11      1.291      0.197   -4.66e-11    2.26e-10
===================================================================================
Ljung-Box (L1) (Q):                  55.51   Jarque-Bera (JB):           4268313.90
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.44
Prob(H) (two-sided):                  0.00   Kurtosis:                       359.56
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.36e+28. Standard errors may be unstable.
ARIMA order: (3, 3, 1) 

WARNING:tensorflow:Layer lstm_50 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.21948, saving model to LSTM5.h5
16/16 - 2s - loss: 0.6907 - val_loss: 0.2195 - lr: 0.0010 - 2s/epoch - 130ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.21948 to 0.03167, saving model to LSTM5.h5
16/16 - 0s - loss: 0.2540 - val_loss: 0.0317 - lr: 0.0010 - 162ms/epoch - 10ms/step
[Epochs 3-52 truncated: val_loss never improved from 0.03167; ReduceLROnPlateau cut the learning rate to 1.0e-04 at epoch 7 and to 1.0e-05 at epoch 12, where min_lr held it for the remaining epochs.]
Epoch 00052: early stopping
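The learning-rate trace in the run above (1.0e-03 at the start, 1.0e-04 from epoch 7, 1.0e-05 from epoch 12, then clamped) is consistent with Keras's `ReduceLROnPlateau` under `factor=0.1`, `patience=5`, `min_lr=1e-5` — hypothetical settings inferred from the log, not confirmed by the notebook's code cells. A minimal sketch of that rule:

```python
def plateau_schedule(val_losses, lr=1e-3, factor=0.1, patience=5, min_lr=1e-5):
    """Replay ReduceLROnPlateau's core rule over a list of epoch val_losses.

    Returns the learning rate in effect after each epoch. The parameter
    values are inferred from the log above, not taken from the source.
    """
    best, wait, trace = float("inf"), 0, []
    for loss in val_losses:
        if loss < best:           # new best -> reset the patience counter
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:  # plateau: cut the LR, floored at min_lr
                lr = max(lr * factor, min_lr)
                wait = 0
        trace.append(lr)
    return trace

# Epochs 1-2 improve (as in the log above), then val_loss stalls.
losses = [0.2195, 0.0317] + [0.10] * 15
trace = plateau_schedule(losses)
```

With the same 5-epoch plateaus as the log, the sketch reduces the rate after epochs 7 and 12 and then holds at `min_lr`, matching the printed `lr:` column.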
SMA
Prediction vs Close:		46.27% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 22.905457367129987 
RMSE:	 4.785964622427749 
MAPE:	 4.003866138329542

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	44.03% Accuracy
MSE:	 24.329194491765858 
RMSE:	 4.932463328983385 
MAPE:	 4.099346553838579
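The accuracy/MSE/RMSE/MAPE figures above follow the usual error definitions. A minimal sketch, assuming "Prediction vs Close" accuracy means the predicted day-over-day direction matches the actual close's direction (the notebook's exact definition is not shown in this output):

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    """Standard point-forecast errors plus a directional-accuracy score."""
    yt = np.asarray(y_true, dtype=float)
    yp = np.asarray(y_pred, dtype=float)
    err = yt - yp
    mse = float(np.mean(err ** 2))
    rmse = float(np.sqrt(mse))
    mape = float(np.mean(np.abs(err) / np.abs(yt)) * 100)
    # Directional accuracy: fraction of steps where the predicted move
    # (up/down/flat) agrees with the actual move.
    hits = np.sign(np.diff(yp)) == np.sign(np.diff(yt))
    acc = float(np.mean(hits) * 100)
    return {"MSE": mse, "RMSE": rmse, "MAPE": mape, "Accuracy": acc}

# Toy example (hypothetical prices, not the notebook's data):
m = forecast_metrics([100.0, 102.0, 101.0, 104.0],
                     [101.0, 102.0, 102.0, 103.0])
```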
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49
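TA-Lib's WMA weights the lookback window linearly, with the most recent bar weighted highest. A minimal NumPy sketch of that convention (like the real `talib.WMA`, the first `timeperiod - 1` outputs are NaN; this is an illustrative reimplementation, not the library call itself):

```python
import numpy as np

def wma(price, timeperiod=30):
    """Linearly weighted moving average, TA-Lib style: weights 1..timeperiod,
    newest bar heaviest; the first timeperiod-1 outputs are NaN."""
    price = np.asarray(price, dtype=float)
    weights = np.arange(1, timeperiod + 1, dtype=float)
    out = np.full(price.shape, np.nan)
    for i in range(timeperiod - 1, len(price)):
        window = price[i - timeperiod + 1 : i + 1]
        out[i] = np.dot(window, weights) / weights.sum()
    return out

vals = wma([1.0, 2.0, 3.0, 4.0, 5.0], timeperiod=3)
```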

Working on WMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16999.780, Time=3.20 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14568.589, Time=4.50 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16789.784, Time=12.14 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14566.589, Time=7.21 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16919.987, Time=9.59 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-14616.097, Time=12.21 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-17225.955, Time=18.57 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-14562.589, Time=9.57 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-15582.364, Time=19.20 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-12043.670, Time=36.41 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 132.614 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8639.977
Date:                Sun, 12 Dec 2021   AIC                         -17225.955
Time:                        15:24:54   BIC                         -17099.302
Sample:                             0   HQIC                        -17177.315
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -4.802e-09   4.51e-06     -0.001      0.999   -8.84e-06    8.83e-06
x2         -4.783e-09    4.5e-06     -0.001      0.999   -8.83e-06    8.82e-06
x3         -4.811e-09   4.51e-06     -0.001      0.999   -8.85e-06    8.84e-06
x4             1.0000   4.51e-06   2.22e+05      0.000       1.000       1.000
x5         -4.353e-09    4.3e-06     -0.001      0.999   -8.43e-06    8.42e-06
x6         -1.569e-08   7.54e-06     -0.002      0.998   -1.48e-05    1.48e-05
x7          -4.75e-09   4.49e-06     -0.001      0.999    -8.8e-06    8.79e-06
x8         -4.628e-09   4.43e-06     -0.001      0.999   -8.69e-06    8.69e-06
x9         -4.733e-10   1.16e-06     -0.000      1.000   -2.27e-06    2.27e-06
x10         -7.88e-10    1.8e-06     -0.000      1.000   -3.52e-06    3.52e-06
x11        -4.609e-09   4.42e-06     -0.001      0.999   -8.68e-06    8.67e-06
x12        -4.607e-09   4.42e-06     -0.001      0.999   -8.68e-06    8.67e-06
x13        -4.792e-09   4.51e-06     -0.001      0.999   -8.84e-06    8.83e-06
x14        -3.777e-08   1.24e-05     -0.003      0.998   -2.44e-05    2.44e-05
x15         -3.99e-09   4.12e-06     -0.001      0.999   -8.08e-06    8.07e-06
x16        -1.309e-08   7.41e-06     -0.002      0.999   -1.45e-05    1.45e-05
x17        -4.789e-09   4.51e-06     -0.001      0.999   -8.85e-06    8.84e-06
x18        -2.665e-10   9.77e-07     -0.000      1.000   -1.92e-06    1.92e-06
x19        -4.919e-09   4.56e-06     -0.001      0.999   -8.94e-06    8.93e-06
x20            -4e-10   9.58e-07     -0.000      1.000   -1.88e-06    1.88e-06
x21        -6.782e-08   1.67e-05     -0.004      0.997   -3.27e-05    3.26e-05
x22         -6.03e-08   1.58e-05     -0.004      0.997   -3.09e-05    3.08e-05
x23        -3.157e-08   1.14e-05     -0.003      0.998   -2.23e-05    2.23e-05
x24        -3.671e-08   1.23e-05     -0.003      0.998   -2.41e-05    2.41e-05
ma.L1         -1.3901   5.58e-10  -2.49e+09      0.000      -1.390      -1.390
ma.L2          0.4033   5.75e-10   7.02e+08      0.000       0.403       0.403
sigma2      7.525e-11   6.92e-11      1.088      0.277   -6.03e-11    2.11e-10
===================================================================================
Ljung-Box (L1) (Q):                  69.18   Jarque-Bera (JB):           6366427.21
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            12.29
Prob(H) (two-sided):                  0.00   Kurtosis:                       437.97
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.29e+25. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 
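The information criteria in the summary above can be cross-checked by hand: AIC = 2k − 2·lnL and BIC = k·ln(n) − 2·lnL. Here k = 27 is an inferred parameter count (24 exogenous coefficients, ma.L1, ma.L2, sigma2), and the BIC appears to use the effective sample after d = 3 differencing (808 − 3 = 805) — both assumptions read off the table, not stated by the notebook:

```python
import math

log_lik = 8639.977   # Log Likelihood from the SARIMAX summary above
k = 24 + 2 + 1       # 24 exog coefficients + ma.L1 + ma.L2 + sigma2 (inferred)
n_eff = 808 - 3      # observations remaining after d=3 differencing (assumed)

aic = 2 * k - 2 * log_lik           # reported: -17225.955
bic = k * math.log(n_eff) - 2 * log_lik  # reported: -17099.302
```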

WARNING:tensorflow:Layer lstm_51 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.02747, saving model to LSTM5.h5
17/17 - 2s - loss: 0.2646 - val_loss: 0.0275 - lr: 0.0010 - 2s/epoch - 103ms/step
[Epochs 2-51 truncated: val_loss never improved from 0.02747; ReduceLROnPlateau cut the learning rate to 1.0e-04 at epoch 6 and to 1.0e-05 at epoch 11, where min_lr held it for the remaining epochs.]
Epoch 00051: early stopping
SMA
Prediction vs Close:		46.27% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 22.905457367129987 
RMSE:	 4.785964622427749 
MAPE:	 4.003866138329542

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	44.03% Accuracy
MSE:	 24.329194491765858 
RMSE:	 4.932463328983385 
MAPE:	 4.099346553838579

WMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 26.731339639305343 
RMSE:	 5.170235936522176 
MAPE:	 4.142288801040536
DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
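The double EMA subtracts the lag introduced by a second smoothing pass: DEMA = 2·EMA(price) − EMA(EMA(price)). A minimal pure-Python sketch, assuming the usual EMA smoothing factor α = 2/(timeperiod + 1); seeding differs slightly from the real `talib.DEMA`, which warms up with an SMA:

```python
def ema(series, timeperiod):
    """Exponential moving average with alpha = 2 / (timeperiod + 1),
    seeded with the first value (TA-Lib seeds with an SMA instead)."""
    alpha = 2.0 / (timeperiod + 1)
    out = [float(series[0])]
    for x in series[1:]:
        out.append(out[-1] + alpha * (x - out[-1]))
    return out

def dema(series, timeperiod=30):
    """DEMA = 2*EMA - EMA(EMA): cancels most of a single EMA's lag."""
    e1 = ema(series, timeperiod)
    e2 = ema(e1, timeperiod)
    return [2 * a - b for a, b in zip(e1, e2)]

vals = dema([1.0, 2.0, 3.0], timeperiod=2)
```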

Working on DEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16999.785, Time=3.45 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14568.588, Time=4.62 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15575.689, Time=9.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14566.588, Time=7.23 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16714.796, Time=8.90 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15610.140, Time=10.43 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-17225.835, Time=22.86 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-14562.588, Time=8.58 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-16751.951, Time=20.65 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-11788.089, Time=30.91 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 126.719 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8639.917
Date:                Sun, 12 Dec 2021   AIC                         -17225.835
Time:                        15:31:29   BIC                         -17099.182
Sample:                             0   HQIC                        -17177.195
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -5.894e-09   3.61e-06     -0.002      0.999   -7.09e-06    7.08e-06
x2          -5.93e-09   3.63e-06     -0.002      0.999   -7.11e-06     7.1e-06
x3         -5.905e-09   3.62e-06     -0.002      0.999    -7.1e-06    7.09e-06
x4             1.0000   3.62e-06   2.76e+05      0.000       1.000       1.000
x5         -5.457e-09   3.48e-06     -0.002      0.999   -6.83e-06    6.82e-06
x6         -3.019e-08   7.72e-06     -0.004      0.997   -1.52e-05    1.51e-05
x7          -5.87e-09   3.61e-06     -0.002      0.999   -7.08e-06    7.07e-06
x8         -5.809e-09   3.59e-06     -0.002      0.999   -7.05e-06    7.04e-06
x9         -9.293e-11   9.83e-08     -0.001      0.999   -1.93e-07    1.93e-07
x10        -2.793e-09   2.47e-06     -0.001      0.999   -4.84e-06    4.84e-06
x11        -6.095e-09   3.68e-06     -0.002      0.999   -7.21e-06     7.2e-06
x12        -5.478e-09   3.49e-06     -0.002      0.999   -6.85e-06    6.84e-06
x13         -5.91e-09   3.62e-06     -0.002      0.999    -7.1e-06    7.09e-06
x14        -4.085e-08   9.35e-06     -0.004      0.997   -1.84e-05    1.83e-05
x15         -5.93e-09   3.63e-06     -0.002      0.999   -7.12e-06    7.11e-06
x16        -1.618e-09   1.92e-06     -0.001      0.999   -3.76e-06    3.75e-06
x17        -5.076e-09   3.37e-06     -0.002      0.999    -6.6e-06    6.59e-06
x18        -1.377e-08    5.5e-06     -0.003      0.998   -1.08e-05    1.08e-05
x19        -6.135e-09   3.69e-06     -0.002      0.999   -7.23e-06    7.22e-06
x20        -1.018e-08   4.43e-06     -0.002      0.998   -8.68e-06    8.66e-06
x21        -6.911e-08   1.21e-05     -0.006      0.995   -2.39e-05    2.37e-05
x22        -5.656e-08    1.1e-05     -0.005      0.996   -2.16e-05    2.15e-05
x23        -5.355e-08   1.07e-05     -0.005      0.996    -2.1e-05    2.09e-05
x24        -3.636e-08   8.85e-06     -0.004      0.997   -1.74e-05    1.73e-05
ma.L1         -1.3899   4.86e-11  -2.86e+10      0.000      -1.390      -1.390
ma.L2          0.4032    4.6e-11   8.76e+09      0.000       0.403       0.403
sigma2      7.526e-11   6.92e-11      1.088      0.277   -6.03e-11    2.11e-10
===================================================================================
Ljung-Box (L1) (Q):                  69.65   Jarque-Bera (JB):           6422892.15
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            12.42
Prob(H) (two-sided):                  0.00   Kurtosis:                       439.89
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.74e+29. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 
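The enormous Jarque-Bera statistic in the diagnostics above follows directly from the reported moments: JB = n/6 · (S² + (K − 3)²/4). With S = 12.42, K = 439.89, and an effective sample of n = 805 (808 observations minus d = 3 lost to differencing — an assumption about how statsmodels counts), the reported value is recovered almost exactly; the squared excess kurtosis dominates, which echoes the leptokurtosis concern raised at the top of this notebook:

```python
n_eff = 808 - 3                  # effective sample after d=3 differencing (assumed)
skew, kurtosis = 12.42, 439.89   # from the diagnostics table above

# Jarque-Bera tests normality via the third and fourth moments; excess
# kurtosis (K - 3) enters squared, so K near 440 swamps the skew term.
jb = n_eff / 6 * (skew ** 2 + (kurtosis - 3) ** 2 / 4)  # reported: 6422892.15
```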

WARNING:tensorflow:Layer lstm_52 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.84951, saving model to LSTM5.h5
10/10 - 2s - loss: 0.2793 - val_loss: 0.8495 - lr: 0.0010 - 2s/epoch - 166ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.84951 to 0.03753, saving model to LSTM5.h5
10/10 - 0s - loss: 0.0845 - val_loss: 0.0375 - lr: 0.0010 - 150ms/epoch - 15ms/step
[Epochs 3-42 truncated: val_loss never improved from 0.03753; ReduceLROnPlateau cut the learning rate to 1.0e-04 at epoch 7 and to 1.0e-05 at epoch 12, where min_lr held it.]
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.03753
10/10 - 0s - loss: 0.0361 - val_loss: 0.1866 - lr: 1.0000e-05 - 110ms/epoch - 11ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.03753
10/10 - 0s - loss: 0.0341 - val_loss: 0.1868 - lr: 1.0000e-05 - 112ms/epoch - 11ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.03753
10/10 - 0s - loss: 0.0345 - val_loss: 0.1869 - lr: 1.0000e-05 - 100ms/epoch - 10ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.03753
10/10 - 0s - loss: 0.0343 - val_loss: 0.1881 - lr: 1.0000e-05 - 109ms/epoch - 11ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.03753
10/10 - 0s - loss: 0.0363 - val_loss: 0.1892 - lr: 1.0000e-05 - 109ms/epoch - 11ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.03753
10/10 - 0s - loss: 0.0347 - val_loss: 0.1887 - lr: 1.0000e-05 - 104ms/epoch - 10ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.03753
10/10 - 0s - loss: 0.0351 - val_loss: 0.1883 - lr: 1.0000e-05 - 98ms/epoch - 10ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.03753
10/10 - 0s - loss: 0.0342 - val_loss: 0.1872 - lr: 1.0000e-05 - 90ms/epoch - 9ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.03753
10/10 - 0s - loss: 0.0385 - val_loss: 0.1874 - lr: 1.0000e-05 - 112ms/epoch - 11ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.03753
10/10 - 0s - loss: 0.0331 - val_loss: 0.1878 - lr: 1.0000e-05 - 100ms/epoch - 10ms/step
Epoch 00052: early stopping
SMA
Prediction vs Close:		46.27% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 22.905457367129987 
RMSE:	 4.785964622427749 
MAPE:	 4.003866138329542

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	44.03% Accuracy
MSE:	 24.329194491765858 
RMSE:	 4.932463328983385 
MAPE:	 4.099346553838579

WMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 26.731339639305343 
RMSE:	 5.170235936522176 
MAPE:	 4.142288801040536

DEMA
Prediction vs Close:		48.13% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 20.555600166870196 
RMSE:	 4.53382842274277 
MAPE:	 3.6522177332314283

KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18

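The help text above is TA-Lib's KAMA docstring. For reference, Kaufman's adaptive moving average can be sketched in plain NumPy from the standard textbook formula (this is a sketch of the formula, not the notebook's actual TA-Lib call; parameter names are assumptions):

```python
import numpy as np

def kama(price, er_period=30, fast=2, slow=30):
    """Kaufman Adaptive Moving Average (textbook formula, NumPy sketch)."""
    price = np.asarray(price, dtype=float)
    n = len(price)
    out = np.full(n, np.nan)
    if n <= er_period:
        return out
    fast_sc, slow_sc = 2.0 / (fast + 1), 2.0 / (slow + 1)
    abs_diff = np.abs(np.diff(price))
    out[er_period] = price[er_period]  # seed the recursion with the price itself
    for t in range(er_period + 1, n):
        change = abs(price[t] - price[t - er_period])
        volatility = abs_diff[t - er_period:t].sum()
        er = change / volatility if volatility > 0 else 0.0   # efficiency ratio
        sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2        # adaptive smoothing constant
        out[t] = out[t - 1] + sc * (price[t] - out[t - 1])
    return out
```

The efficiency ratio pushes the smoothing constant toward the fast EMA weight in trending markets and toward the slow one in choppy markets, which is why KAMA adapts its lag to volatility.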
Working on KAMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16921.943, Time=10.75 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14568.592, Time=4.54 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16797.275, Time=9.33 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14566.592, Time=7.05 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16996.465, Time=3.51 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-16999.509, Time=3.29 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17171.315, Time=6.52 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-16994.523, Time=3.87 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=-15518.026, Time=29.54 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 78.397 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood                8613.658
Date:                Sun, 12 Dec 2021   AIC                         -17171.315
Time:                        15:37:20   BIC                         -17039.972
Sample:                             0   HQIC                        -17120.874
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          -5.14e-10    7.6e-05  -6.76e-06      1.000      -0.000       0.000
x2         -5.041e-10   7.52e-05   -6.7e-06      1.000      -0.000       0.000
x3         -4.834e-10   7.38e-05  -6.55e-06      1.000      -0.000       0.000
x4             1.0000   7.46e-05   1.34e+04      0.000       1.000       1.000
x5         -4.462e-10   7.09e-05  -6.29e-06      1.000      -0.000       0.000
x6         -3.064e-09      0.000  -1.84e-05      1.000      -0.000       0.000
x7         -4.751e-10   7.35e-05  -6.46e-06      1.000      -0.000       0.000
x8         -4.628e-10   7.28e-05  -6.36e-06      1.000      -0.000       0.000
x9          -9.21e-11   9.37e-06  -9.83e-06      1.000   -1.84e-05    1.84e-05
x10        -2.165e-10    3.1e-05  -6.98e-06      1.000   -6.08e-05    6.08e-05
x11        -4.665e-10   7.28e-05  -6.41e-06      1.000      -0.000       0.000
x12         -4.62e-10   7.23e-05  -6.39e-06      1.000      -0.000       0.000
x13        -4.906e-10   7.43e-05   -6.6e-06      1.000      -0.000       0.000
x14        -3.985e-09      0.000  -1.87e-05      1.000      -0.000       0.000
x15        -4.897e-10   7.48e-05  -6.55e-06      1.000      -0.000       0.000
x16        -7.327e-10   9.24e-05  -7.93e-06      1.000      -0.000       0.000
x17        -4.173e-10   6.93e-05  -6.02e-06      1.000      -0.000       0.000
x18        -3.397e-10   6.02e-05  -5.64e-06      1.000      -0.000       0.000
x19        -6.012e-10    8.3e-05  -7.25e-06      1.000      -0.000       0.000
x20         -9.09e-10      0.000  -9.05e-06      1.000      -0.000       0.000
x21        -6.188e-09      0.000  -2.32e-05      1.000      -0.001       0.001
x22        -1.992e-09      0.000  -1.33e-05      1.000      -0.000       0.000
x23        -3.669e-09      0.000  -1.79e-05      1.000      -0.000       0.000
x24        -1.065e-09      0.000  -1.01e-05      1.000      -0.000       0.000
ar.L1         -1.2073   5.73e-10  -2.11e+09      0.000      -1.207      -1.207
ar.L2         -0.9083   5.93e-10  -1.53e+09      0.000      -0.908      -0.908
ar.L3         -0.4033   5.84e-10  -6.91e+08      0.000      -0.403      -0.403
sigma2       8.06e-11   6.94e-11      1.162      0.245   -5.54e-11    2.17e-10
===================================================================================
Ljung-Box (L1) (Q):                  13.77   Jarque-Bera (JB):           2436796.68
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             4.07
Prob(H) (two-sided):                  0.00   Kurtosis:                       272.41
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.23e+28. Standard errors may be unstable.
ARIMA order: (3, 3, 0) 

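The stepwise search above is pmdarima's `auto_arima` minimizing AIC across candidate orders. The underlying idea, fit each candidate and keep the lowest AIC, can be illustrated with a toy AR-only order search in pure NumPy (this is not pmdarima's stepwise algorithm; it ignores differencing and MA terms, and the AIC is computed only up to an additive constant):

```python
import numpy as np

def ar_aic(y, p, pmax=5):
    """AIC (up to a constant) of a least-squares AR(p) fit on a common sample."""
    target = y[pmax:]  # same target for every p, so AICs are comparable
    X = np.column_stack([y[pmax - j:len(y) - j] for j in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    rss = float(np.sum((target - X @ coef) ** 2))
    n = len(target)
    return n * np.log(rss / n) + 2 * p  # penalize extra lags

# Simulate an AR(2) series and pick the lag order with the lowest AIC.
rng = np.random.default_rng(0)
y = np.zeros(500)
eps = rng.normal(size=500)
for t in range(2, 500):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + eps[t]

best_p = min(range(1, 6), key=lambda p: ar_aic(y, p))
```

`auto_arima` does the analogous comparison over (p, d, q) triples, expanding outward from promising candidates instead of exhausting the grid.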
WARNING:tensorflow:Layer lstm_53 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.31708, saving model to LSTM5.h5
45/45 - 2s - loss: 0.1855 - val_loss: 0.3171 - lr: 0.0010 - 2s/epoch - 44ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.31708
45/45 - 0s - loss: 0.1562 - val_loss: 0.7818 - lr: 0.0010 - 363ms/epoch - 8ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.31708
45/45 - 0s - loss: 0.0779 - val_loss: 0.5710 - lr: 0.0010 - 384ms/epoch - 9ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.31708 to 0.21579, saving model to LSTM5.h5
45/45 - 0s - loss: 0.0597 - val_loss: 0.2158 - lr: 0.0010 - 416ms/epoch - 9ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.21579 to 0.13465, saving model to LSTM5.h5
45/45 - 0s - loss: 0.0558 - val_loss: 0.1346 - lr: 0.0010 - 449ms/epoch - 10ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.13465 to 0.10282, saving model to LSTM5.h5
45/45 - 0s - loss: 0.0428 - val_loss: 0.1028 - lr: 0.0010 - 434ms/epoch - 10ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.10282 to 0.09417, saving model to LSTM5.h5
45/45 - 0s - loss: 0.0417 - val_loss: 0.0942 - lr: 0.0010 - 419ms/epoch - 9ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.09417 to 0.06728, saving model to LSTM5.h5
45/45 - 0s - loss: 0.0325 - val_loss: 0.0673 - lr: 0.0010 - 419ms/epoch - 9ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.06728 to 0.02801, saving model to LSTM5.h5
45/45 - 0s - loss: 0.0371 - val_loss: 0.0280 - lr: 0.0010 - 372ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.02801
45/45 - 0s - loss: 0.0375 - val_loss: 0.1020 - lr: 0.0010 - 390ms/epoch - 9ms/step
Epoch 11/500

Epoch 00011: val_loss improved from 0.02801 to 0.02044, saving model to LSTM5.h5
45/45 - 0s - loss: 0.0345 - val_loss: 0.0204 - lr: 0.0010 - 377ms/epoch - 8ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.02044
45/45 - 0s - loss: 0.0361 - val_loss: 0.1002 - lr: 0.0010 - 370ms/epoch - 8ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.02044
45/45 - 0s - loss: 0.0303 - val_loss: 0.0225 - lr: 0.0010 - 361ms/epoch - 8ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.02044
45/45 - 0s - loss: 0.0317 - val_loss: 0.0947 - lr: 0.0010 - 368ms/epoch - 8ms/step
Epoch 15/500

Epoch 00015: val_loss improved from 0.02044 to 0.01486, saving model to LSTM5.h5
45/45 - 0s - loss: 0.0339 - val_loss: 0.0149 - lr: 0.0010 - 378ms/epoch - 8ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.01486
45/45 - 0s - loss: 0.0341 - val_loss: 0.1791 - lr: 0.0010 - 363ms/epoch - 8ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.01486
45/45 - 0s - loss: 0.0316 - val_loss: 0.0240 - lr: 0.0010 - 354ms/epoch - 8ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.01486
45/45 - 0s - loss: 0.0266 - val_loss: 0.0898 - lr: 0.0010 - 386ms/epoch - 9ms/step
Epoch 19/500

Epoch 00019: val_loss improved from 0.01486 to 0.01280, saving model to LSTM5.h5
45/45 - 0s - loss: 0.0376 - val_loss: 0.0128 - lr: 0.0010 - 423ms/epoch - 9ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.01280
45/45 - 0s - loss: 0.0332 - val_loss: 0.0765 - lr: 0.0010 - 363ms/epoch - 8ms/step
Epoch 21/500

Epoch 00021: val_loss improved from 0.01280 to 0.01098, saving model to LSTM5.h5
45/45 - 0s - loss: 0.0252 - val_loss: 0.0110 - lr: 0.0010 - 412ms/epoch - 9ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.01098
45/45 - 0s - loss: 0.0326 - val_loss: 0.1760 - lr: 0.0010 - 414ms/epoch - 9ms/step
Epoch 23/500

Epoch 00023: val_loss improved from 0.01098 to 0.01031, saving model to LSTM5.h5
45/45 - 0s - loss: 0.0246 - val_loss: 0.0103 - lr: 0.0010 - 487ms/epoch - 11ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0223 - val_loss: 0.1079 - lr: 0.0010 - 361ms/epoch - 8ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0265 - val_loss: 0.0105 - lr: 0.0010 - 357ms/epoch - 8ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0224 - val_loss: 0.0331 - lr: 0.0010 - 347ms/epoch - 8ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0242 - val_loss: 0.0119 - lr: 0.0010 - 386ms/epoch - 9ms/step
Epoch 28/500

Epoch 00028: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00028: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0233 - val_loss: 0.1756 - lr: 0.0010 - 376ms/epoch - 8ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0292 - val_loss: 0.1420 - lr: 1.0000e-04 - 377ms/epoch - 8ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0205 - val_loss: 0.1177 - lr: 1.0000e-04 - 388ms/epoch - 9ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0219 - val_loss: 0.0977 - lr: 1.0000e-04 - 447ms/epoch - 10ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0193 - val_loss: 0.0782 - lr: 1.0000e-04 - 386ms/epoch - 9ms/step
Epoch 33/500

Epoch 00033: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00033: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0192 - val_loss: 0.0651 - lr: 1.0000e-04 - 383ms/epoch - 9ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0191 - val_loss: 0.0642 - lr: 1.0000e-05 - 374ms/epoch - 8ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0196 - val_loss: 0.0634 - lr: 1.0000e-05 - 422ms/epoch - 9ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0191 - val_loss: 0.0627 - lr: 1.0000e-05 - 415ms/epoch - 9ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0186 - val_loss: 0.0618 - lr: 1.0000e-05 - 363ms/epoch - 8ms/step
Epoch 38/500

Epoch 00038: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00038: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0218 - val_loss: 0.0608 - lr: 1.0000e-05 - 360ms/epoch - 8ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0191 - val_loss: 0.0598 - lr: 1.0000e-05 - 353ms/epoch - 8ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0191 - val_loss: 0.0590 - lr: 1.0000e-05 - 366ms/epoch - 8ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0180 - val_loss: 0.0581 - lr: 1.0000e-05 - 352ms/epoch - 8ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0180 - val_loss: 0.0576 - lr: 1.0000e-05 - 357ms/epoch - 8ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0196 - val_loss: 0.0563 - lr: 1.0000e-05 - 386ms/epoch - 9ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0192 - val_loss: 0.0555 - lr: 1.0000e-05 - 409ms/epoch - 9ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0195 - val_loss: 0.0549 - lr: 1.0000e-05 - 354ms/epoch - 8ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0193 - val_loss: 0.0538 - lr: 1.0000e-05 - 378ms/epoch - 8ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0186 - val_loss: 0.0530 - lr: 1.0000e-05 - 379ms/epoch - 8ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0185 - val_loss: 0.0524 - lr: 1.0000e-05 - 407ms/epoch - 9ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0174 - val_loss: 0.0518 - lr: 1.0000e-05 - 394ms/epoch - 9ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0199 - val_loss: 0.0510 - lr: 1.0000e-05 - 405ms/epoch - 9ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0185 - val_loss: 0.0505 - lr: 1.0000e-05 - 403ms/epoch - 9ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0204 - val_loss: 0.0495 - lr: 1.0000e-05 - 394ms/epoch - 9ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0184 - val_loss: 0.0491 - lr: 1.0000e-05 - 371ms/epoch - 8ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0198 - val_loss: 0.0492 - lr: 1.0000e-05 - 373ms/epoch - 8ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0187 - val_loss: 0.0480 - lr: 1.0000e-05 - 347ms/epoch - 8ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0191 - val_loss: 0.0471 - lr: 1.0000e-05 - 395ms/epoch - 9ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0175 - val_loss: 0.0460 - lr: 1.0000e-05 - 395ms/epoch - 9ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0165 - val_loss: 0.0452 - lr: 1.0000e-05 - 415ms/epoch - 9ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0162 - val_loss: 0.0443 - lr: 1.0000e-05 - 416ms/epoch - 9ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0174 - val_loss: 0.0435 - lr: 1.0000e-05 - 353ms/epoch - 8ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0176 - val_loss: 0.0428 - lr: 1.0000e-05 - 409ms/epoch - 9ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0178 - val_loss: 0.0423 - lr: 1.0000e-05 - 376ms/epoch - 8ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0187 - val_loss: 0.0412 - lr: 1.0000e-05 - 387ms/epoch - 9ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0179 - val_loss: 0.0408 - lr: 1.0000e-05 - 399ms/epoch - 9ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0180 - val_loss: 0.0405 - lr: 1.0000e-05 - 354ms/epoch - 8ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0176 - val_loss: 0.0395 - lr: 1.0000e-05 - 360ms/epoch - 8ms/step
Epoch 67/500

Epoch 00067: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0177 - val_loss: 0.0394 - lr: 1.0000e-05 - 346ms/epoch - 8ms/step
Epoch 68/500

Epoch 00068: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0183 - val_loss: 0.0387 - lr: 1.0000e-05 - 381ms/epoch - 8ms/step
Epoch 69/500

Epoch 00069: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0171 - val_loss: 0.0381 - lr: 1.0000e-05 - 397ms/epoch - 9ms/step
Epoch 70/500

Epoch 00070: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0180 - val_loss: 0.0370 - lr: 1.0000e-05 - 417ms/epoch - 9ms/step
Epoch 71/500

Epoch 00071: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0200 - val_loss: 0.0374 - lr: 1.0000e-05 - 364ms/epoch - 8ms/step
Epoch 72/500

Epoch 00072: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0188 - val_loss: 0.0366 - lr: 1.0000e-05 - 361ms/epoch - 8ms/step
Epoch 73/500

Epoch 00073: val_loss did not improve from 0.01031
45/45 - 0s - loss: 0.0202 - val_loss: 0.0367 - lr: 1.0000e-05 - 348ms/epoch - 8ms/step
Epoch 00073: early stopping
SMA
Prediction vs Close:		46.27% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 22.905457367129987 
RMSE:	 4.785964622427749 
MAPE:	 4.003866138329542

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	44.03% Accuracy
MSE:	 24.329194491765858 
RMSE:	 4.932463328983385 
MAPE:	 4.099346553838579

WMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 26.731339639305343 
RMSE:	 5.170235936522176 
MAPE:	 4.142288801040536

DEMA
Prediction vs Close:		48.13% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 20.555600166870196 
RMSE:	 4.53382842274277 
MAPE:	 3.6522177332314283

KAMA
Prediction vs Close:		57.84% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 36.658985139129086 
RMSE:	 6.054666393710646 
MAPE:	 4.91375972579294
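The per-indicator summaries report MSE, RMSE, MAPE, and two directional-accuracy figures. A minimal sketch of how such scores can be computed, where the exact definitions of "Prediction vs Close" and "Prediction vs Prediction" are assumptions inferred from the labels, not taken from the notebook's code:

```python
import numpy as np

def score(pred, close):
    """MSE, RMSE, MAPE and two directional-accuracy figures (assumed definitions)."""
    pred, close = np.asarray(pred, float), np.asarray(close, float)
    err = pred - close
    mse = float(np.mean(err ** 2))
    rmse = float(np.sqrt(mse))
    mape = float(np.mean(np.abs(err / close))) * 100
    # "Prediction vs Close": predicted move from yesterday's close vs actual move
    pvc = float(np.mean(np.sign(pred[1:] - close[:-1]) ==
                        np.sign(close[1:] - close[:-1]))) * 100
    # "Prediction vs Prediction": sign of consecutive predicted changes vs actual
    pvp = float(np.mean(np.sign(np.diff(pred)) == np.sign(np.diff(close)))) * 100
    return mse, rmse, mape, pvc, pvp
```

Directional accuracy near 50% is what a coin flip achieves, which puts the 44-58% figures above in context.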

MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14

Working on MIDPOINT predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16999.768, Time=3.28 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14568.591, Time=4.56 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15581.065, Time=8.79 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14566.591, Time=7.49 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16536.628, Time=9.58 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-13971.493, Time=10.43 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-17226.044, Time=21.15 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-14562.591, Time=9.27 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-16754.945, Time=19.51 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-15001.855, Time=21.51 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 115.600 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8640.022
Date:                Sun, 12 Dec 2021   AIC                         -17226.044
Time:                        15:41:32   BIC                         -17099.391
Sample:                             0   HQIC                        -17177.404
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -5.031e-09   1.06e-05     -0.000      1.000   -2.08e-05    2.08e-05
x2          -4.99e-09   8.12e-06     -0.001      1.000   -1.59e-05    1.59e-05
x3         -5.114e-09   1.38e-05     -0.000      1.000    -2.7e-05     2.7e-05
x4             1.0000   8.91e-06   1.12e+05      0.000       1.000       1.000
x5          -4.55e-09    8.2e-06     -0.001      1.000   -1.61e-05    1.61e-05
x6         -9.992e-08      0.001     -0.000      1.000      -0.002       0.002
x7         -4.607e-09   1.97e-05     -0.000      1.000   -3.86e-05    3.86e-05
x8         -4.591e-09   1.77e-05     -0.000      1.000   -3.48e-05    3.48e-05
x9         -2.538e-09   1.13e-05     -0.000      1.000   -2.21e-05    2.21e-05
x10        -4.315e-09   6.08e-06     -0.001      0.999   -1.19e-05    1.19e-05
x11        -4.545e-09   1.62e-05     -0.000      1.000   -3.18e-05    3.18e-05
x12        -4.701e-09   1.97e-05     -0.000      1.000   -3.87e-05    3.87e-05
x13        -4.823e-09   1.18e-05     -0.000      1.000    -2.3e-05     2.3e-05
x14         -4.08e-08   4.99e-05     -0.001      0.999   -9.79e-05    9.78e-05
x15        -5.557e-09   2.03e-05     -0.000      1.000   -3.99e-05    3.99e-05
x16        -3.541e-09    1.3e-05     -0.000      1.000   -2.55e-05    2.55e-05
x17        -3.463e-09   1.51e-05     -0.000      1.000   -2.97e-05    2.97e-05
x18        -1.534e-08      4e-05     -0.000      1.000   -7.85e-05    7.85e-05
x19        -6.118e-09   2.07e-05     -0.000      1.000   -4.05e-05    4.05e-05
x20        -1.581e-08   3.38e-05     -0.000      1.000   -6.62e-05    6.61e-05
x21        -5.505e-08    5.6e-05     -0.001      0.999      -0.000       0.000
x22        -2.936e-08   4.55e-05     -0.001      0.999   -8.92e-05    8.92e-05
x23        -3.882e-08   4.89e-05     -0.001      0.999   -9.58e-05    9.57e-05
x24        -2.099e-08   4.87e-05     -0.000      1.000   -9.54e-05    9.54e-05
ma.L1         -1.3900   1.23e-07  -1.13e+07      0.000      -1.390      -1.390
ma.L2          0.4044   1.43e-07   2.82e+06      0.000       0.404       0.404
sigma2      7.525e-11   7.22e-11      1.042      0.297   -6.63e-11    2.17e-10
===================================================================================
Ljung-Box (L1) (Q):                  55.84   Jarque-Bera (JB):           1335305.59
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.09   Skew:                             5.74
Prob(H) (two-sided):                  0.00   Kurtosis:                       202.19
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.77e+23. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 

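The learning-rate schedule visible in these logs (1e-3 → 1e-4 → 1e-5, then "early stopping") is produced by Keras's ReduceLROnPlateau and EarlyStopping callbacks. A simplified pure-Python mimic of that bookkeeping, with patience values inferred from the log (assumptions; real Keras also applies `min_delta` tolerances):

```python
def run_schedule(val_losses, lr=1e-3, factor=0.1, min_lr=1e-5,
                 lr_patience=5, stop_patience=50):
    """Mimic ReduceLROnPlateau + EarlyStopping bookkeeping (simplified sketch)."""
    best, lr_wait, stop_wait = float("inf"), 0, 0
    lrs, stopped_at = [], None
    for epoch, vl in enumerate(val_losses, start=1):
        lrs.append(lr)                      # lr in effect this epoch
        if vl < best:                       # improvement resets both counters
            best, lr_wait, stop_wait = vl, 0, 0
        else:
            lr_wait += 1
            stop_wait += 1
            if lr_wait >= lr_patience:      # short plateau: cut the lr
                lr = max(lr * factor, min_lr)
                lr_wait = 0
            if stop_wait >= stop_patience:  # long plateau: stop training
                stopped_at = epoch
                break
    return lrs, stopped_at
```

This reproduces the pattern in the log: repeated "did not improve" lines drive the lr down by a factor of 10 until it clamps at `min_lr`, and a sufficiently long streak without a new best triggers early stopping.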
WARNING:tensorflow:Layer lstm_54 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.10748, saving model to LSTM5.h5
58/58 - 2s - loss: 0.1689 - val_loss: 0.1075 - lr: 0.0010 - 2s/epoch - 37ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.10748
58/58 - 0s - loss: 0.1680 - val_loss: 0.1948 - lr: 0.0010 - 499ms/epoch - 9ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.10748
58/58 - 0s - loss: 0.0806 - val_loss: 0.2118 - lr: 0.0010 - 480ms/epoch - 8ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.10748 to 0.02087, saving model to LSTM5.h5
58/58 - 0s - loss: 0.0585 - val_loss: 0.0209 - lr: 0.0010 - 491ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.02087
58/58 - 0s - loss: 0.0409 - val_loss: 0.1412 - lr: 0.0010 - 480ms/epoch - 8ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.02087 to 0.00898, saving model to LSTM5.h5
58/58 - 1s - loss: 0.0370 - val_loss: 0.0090 - lr: 0.0010 - 515ms/epoch - 9ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0341 - val_loss: 0.0605 - lr: 0.0010 - 489ms/epoch - 8ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0395 - val_loss: 0.0121 - lr: 0.0010 - 491ms/epoch - 8ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0363 - val_loss: 0.1334 - lr: 0.0010 - 478ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00898
58/58 - 1s - loss: 0.0293 - val_loss: 0.0147 - lr: 0.0010 - 538ms/epoch - 9ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00011: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0259 - val_loss: 0.0226 - lr: 0.0010 - 474ms/epoch - 8ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0300 - val_loss: 0.0235 - lr: 1.0000e-04 - 499ms/epoch - 9ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0270 - val_loss: 0.0340 - lr: 1.0000e-04 - 482ms/epoch - 8ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0257 - val_loss: 0.0341 - lr: 1.0000e-04 - 466ms/epoch - 8ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00898
58/58 - 1s - loss: 0.0250 - val_loss: 0.0326 - lr: 1.0000e-04 - 500ms/epoch - 9ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00016: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0234 - val_loss: 0.0340 - lr: 1.0000e-04 - 475ms/epoch - 8ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00898
58/58 - 1s - loss: 0.0251 - val_loss: 0.0342 - lr: 1.0000e-05 - 508ms/epoch - 9ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0253 - val_loss: 0.0340 - lr: 1.0000e-05 - 470ms/epoch - 8ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0248 - val_loss: 0.0336 - lr: 1.0000e-05 - 485ms/epoch - 8ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0236 - val_loss: 0.0334 - lr: 1.0000e-05 - 475ms/epoch - 8ms/step
Epoch 21/500

Epoch 00021: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00021: val_loss did not improve from 0.00898
58/58 - 1s - loss: 0.0269 - val_loss: 0.0328 - lr: 1.0000e-05 - 508ms/epoch - 9ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0228 - val_loss: 0.0323 - lr: 1.0000e-05 - 483ms/epoch - 8ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.00898
58/58 - 1s - loss: 0.0249 - val_loss: 0.0321 - lr: 1.0000e-05 - 553ms/epoch - 10ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00898
58/58 - 1s - loss: 0.0230 - val_loss: 0.0314 - lr: 1.0000e-05 - 504ms/epoch - 9ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.00898
58/58 - 1s - loss: 0.0241 - val_loss: 0.0317 - lr: 1.0000e-05 - 551ms/epoch - 9ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0241 - val_loss: 0.0319 - lr: 1.0000e-05 - 486ms/epoch - 8ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0215 - val_loss: 0.0318 - lr: 1.0000e-05 - 488ms/epoch - 8ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00898
58/58 - 1s - loss: 0.0245 - val_loss: 0.0321 - lr: 1.0000e-05 - 509ms/epoch - 9ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0258 - val_loss: 0.0326 - lr: 1.0000e-05 - 458ms/epoch - 8ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0252 - val_loss: 0.0324 - lr: 1.0000e-05 - 473ms/epoch - 8ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0237 - val_loss: 0.0323 - lr: 1.0000e-05 - 498ms/epoch - 9ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0245 - val_loss: 0.0316 - lr: 1.0000e-05 - 477ms/epoch - 8ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0229 - val_loss: 0.0312 - lr: 1.0000e-05 - 458ms/epoch - 8ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0234 - val_loss: 0.0306 - lr: 1.0000e-05 - 492ms/epoch - 8ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00898
58/58 - 1s - loss: 0.0240 - val_loss: 0.0302 - lr: 1.0000e-05 - 508ms/epoch - 9ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0223 - val_loss: 0.0297 - lr: 1.0000e-05 - 466ms/epoch - 8ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00898
58/58 - 1s - loss: 0.0262 - val_loss: 0.0299 - lr: 1.0000e-05 - 521ms/epoch - 9ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0258 - val_loss: 0.0293 - lr: 1.0000e-05 - 462ms/epoch - 8ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0211 - val_loss: 0.0285 - lr: 1.0000e-05 - 469ms/epoch - 8ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0240 - val_loss: 0.0282 - lr: 1.0000e-05 - 487ms/epoch - 8ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00898
58/58 - 1s - loss: 0.0240 - val_loss: 0.0273 - lr: 1.0000e-05 - 513ms/epoch - 9ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0247 - val_loss: 0.0267 - lr: 1.0000e-05 - 495ms/epoch - 9ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0217 - val_loss: 0.0269 - lr: 1.0000e-05 - 485ms/epoch - 8ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0232 - val_loss: 0.0270 - lr: 1.0000e-05 - 497ms/epoch - 9ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0220 - val_loss: 0.0272 - lr: 1.0000e-05 - 494ms/epoch - 9ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0238 - val_loss: 0.0279 - lr: 1.0000e-05 - 471ms/epoch - 8ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0239 - val_loss: 0.0276 - lr: 1.0000e-05 - 499ms/epoch - 9ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0232 - val_loss: 0.0266 - lr: 1.0000e-05 - 466ms/epoch - 8ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0230 - val_loss: 0.0257 - lr: 1.0000e-05 - 498ms/epoch - 9ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0224 - val_loss: 0.0248 - lr: 1.0000e-05 - 462ms/epoch - 8ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0226 - val_loss: 0.0254 - lr: 1.0000e-05 - 485ms/epoch - 8ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00898
58/58 - 1s - loss: 0.0212 - val_loss: 0.0269 - lr: 1.0000e-05 - 518ms/epoch - 9ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0217 - val_loss: 0.0270 - lr: 1.0000e-05 - 479ms/epoch - 8ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0233 - val_loss: 0.0266 - lr: 1.0000e-05 - 458ms/epoch - 8ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0240 - val_loss: 0.0280 - lr: 1.0000e-05 - 488ms/epoch - 8ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00898
58/58 - 0s - loss: 0.0221 - val_loss: 0.0271 - lr: 1.0000e-05 - 475ms/epoch - 8ms/step
Epoch 00056: early stopping
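The learning-rate trace in the log above (1e-3 → 1e-4 → 1e-5, then flat) is the signature of Keras' ReduceLROnPlateau with a reduction factor of 0.1 and a 1e-5 floor. The reduction rule itself is simple; a pure-Python sketch, with the factor and floor read off the log rather than from the notebook's code:

```python
def reduced_lr(lr, factor=0.1, min_lr=1e-5):
    """One ReduceLROnPlateau step: scale the learning rate, clipped at the floor."""
    return max(lr * factor, min_lr)

lr, schedule = 1e-3, [1e-3]
for _ in range(3):                 # three plateau events, as in the log above
    lr = reduced_lr(lr)
    schedule.append(lr)
print(schedule)                    # stays pinned at the 1e-5 floor once reached
```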
SMA
Prediction vs Close:		46.27% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 22.905457367129987 
RMSE:	 4.785964622427749 
MAPE:	 4.003866138329542

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	44.03% Accuracy
MSE:	 24.329194491765858 
RMSE:	 4.932463328983385 
MAPE:	 4.099346553838579

WMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 26.731339639305343 
RMSE:	 5.170235936522176 
MAPE:	 4.142288801040536

DEMA
Prediction vs Close:		48.13% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 20.555600166870196 
RMSE:	 4.53382842274277 
MAPE:	 3.6522177332314283

KAMA
Prediction vs Close:		57.84% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 36.658985139129086 
RMSE:	 6.054666393710646 
MAPE:	 4.91375972579294

MIDPOINT
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	44.4% Accuracy
MSE:	 46.211728483578064 
RMSE:	 6.797920894183608 
MAPE:	 5.510818514624332
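Each block above reports MSE, RMSE, MAPE, and two directional accuracies per moving-average variant. A minimal sketch of how such a report can be computed; the notebook's exact accuracy definition is not shown in this output, so the direction rule below is an assumption:

```python
import math

def regression_report(actual, predicted):
    """MSE, RMSE and MAPE, as reported for each moving-average variant above."""
    n = len(actual)
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n
    mape = 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n
    return mse, math.sqrt(mse), mape

def directional_accuracy(actual, predicted):
    """Share of steps where the predicted move from the previous close matches
    the actual move (assumed definition of the 'Prediction vs Close' accuracy)."""
    hits = sum(
        (p1 - a0) * (a1 - a0) > 0
        for a0, a1, p1 in zip(actual, actual[1:], predicted[1:])
    )
    return 100 * hits / (len(actual) - 1)
```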
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
19
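The TA-Lib docstring above gives only the T3 signature. Tillson's T3 is three passes of a "generalized DEMA", GD(x) = (1+v)·EMA(x) − v·EMA(EMA(x)); a plain-Python sketch of that formulation (TA-Lib's lookback/warm-up handling differs, so values near the start of the series will not match TA-Lib's output):

```python
def ema(xs, period):
    """Standard exponential moving average, seeded with the first value."""
    k = 2 / (period + 1)
    out = [xs[0]]
    for x in xs[1:]:
        out.append(k * x + (1 - k) * out[-1])
    return out

def t3(xs, period=5, vfactor=0.7):
    """Tillson T3: GD applied three times, GD(x) = (1+v)*EMA(x) - v*EMA(EMA(x))."""
    def gd(series):
        e1 = ema(series, period)
        e2 = ema(e1, period)
        return [(1 + vfactor) * a - vfactor * b for a, b in zip(e1, e2)]
    return gd(gd(gd(xs)))
```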

Working on T3 predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17000.569, Time=3.24 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-15576.554, Time=5.69 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16078.305, Time=8.02 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-15574.554, Time=8.99 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16998.627, Time=3.52 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-16429.916, Time=12.84 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-17000.664, Time=3.56 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-15700.026, Time=11.86 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-15704.282, Time=15.01 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-16998.664, Time=3.14 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 75.889 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8527.332
Date:                Sun, 12 Dec 2021   AIC                         -17000.664
Time:                        15:47:27   BIC                         -16874.011
Sample:                             0   HQIC                        -16952.024
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          8.378e-14   2.16e-06   3.89e-08      1.000   -4.23e-06    4.23e-06
x2          7.457e-14   2.15e-06   3.47e-08      1.000   -4.22e-06    4.22e-06
x3          2.279e-14   2.16e-06   1.05e-08      1.000   -4.24e-06    4.24e-06
x4             1.0000   2.16e-06   4.63e+05      0.000       1.000       1.000
x5          1.211e-12   2.07e-06   5.86e-07      1.000   -4.05e-06    4.05e-06
x6          3.146e-15   2.67e-06   1.18e-09      1.000   -5.23e-06    5.23e-06
x7          1.593e-13   2.15e-06   7.41e-08      1.000   -4.21e-06    4.21e-06
x8            -0.0001    2.1e-06    -48.778      0.000      -0.000   -9.82e-05
x9          5.141e-14   6.35e-07    8.1e-08      1.000   -1.24e-06    1.24e-06
x10        -6.174e-05   1.34e-06    -45.995      0.000   -6.44e-05   -5.91e-05
x11            0.0003   2.15e-06    148.354      0.000       0.000       0.000
x12           -0.0002   2.02e-06    -93.730      0.000      -0.000      -0.000
x13         1.967e-14   2.16e-06    9.1e-09      1.000   -4.23e-06    4.23e-06
x14        -1.297e-14   5.65e-06  -2.29e-09      1.000   -1.11e-05    1.11e-05
x15         -3.18e-12   1.82e-06  -1.75e-06      1.000   -3.57e-06    3.57e-06
x16        -1.426e-12   4.51e-06  -3.16e-07      1.000   -8.84e-06    8.84e-06
x17         7.474e-13   2.37e-06   3.16e-07      1.000   -4.64e-06    4.64e-06
x18         -2.92e-13    2.9e-06  -1.01e-07      1.000   -5.68e-06    5.68e-06
x19        -4.211e-14   1.89e-06  -2.22e-08      1.000   -3.71e-06    3.71e-06
x20        -1.515e-13    1.2e-06  -1.26e-07      1.000   -2.36e-06    2.36e-06
x21         6.555e-13   6.37e-06   1.03e-07      1.000   -1.25e-05    1.25e-05
x22         1.212e-14   6.19e-06   1.96e-09      1.000   -1.21e-05    1.21e-05
x23        -3.877e-13   3.76e-06  -1.03e-07      1.000   -7.38e-06    7.38e-06
x24         8.127e-15   4.01e-06   2.03e-09      1.000   -7.86e-06    7.86e-06
ma.L1         -1.3370   3.84e-12  -3.48e+11      0.000      -1.337      -1.337
ma.L2          0.4289   1.65e-12    2.6e+11      0.000       0.429       0.429
sigma2          1e-10   6.99e-11      1.430      0.153   -3.71e-11    2.37e-10
===================================================================================
Ljung-Box (L1) (Q):                   4.57   Jarque-Bera (JB):           3228712.87
Prob(Q):                              0.03   Prob(JB):                         0.00
Heteroskedasticity (H):               0.12   Skew:                            -9.87
Prob(H) (two-sided):                  0.00   Kurtosis:                       312.63
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.57e+30. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 
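As a sanity check, the AIC in the summary above follows from AIC = 2k − 2 ln L̂ with k = 27 estimated parameters (the 24 exogenous coefficients x1–x24, ma.L1, ma.L2, and sigma2):

```python
# Verify the reported AIC from the SARIMAX summary above.
log_likelihood = 8527.332       # 'Log Likelihood' from the summary
k = 24 + 2 + 1                  # exogenous betas + two MA terms + sigma2
aic = 2 * k - 2 * log_likelihood
print(round(aic, 3))            # matches the reported AIC of -17000.664
```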

WARNING:tensorflow:Layer lstm_55 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.19644, saving model to LSTM5.h5
43/43 - 2s - loss: 0.3641 - val_loss: 0.1964 - lr: 0.0010 - 2s/epoch - 46ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.19644 to 0.02016, saving model to LSTM5.h5
43/43 - 0s - loss: 0.1791 - val_loss: 0.0202 - lr: 0.0010 - 442ms/epoch - 10ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0733 - val_loss: 0.5417 - lr: 0.0010 - 363ms/epoch - 8ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0581 - val_loss: 0.2098 - lr: 0.0010 - 341ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0566 - val_loss: 0.0324 - lr: 0.0010 - 405ms/epoch - 9ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0494 - val_loss: 0.1509 - lr: 0.0010 - 381ms/epoch - 9ms/step
Epoch 7/500

Epoch 00007: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00007: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0493 - val_loss: 0.0739 - lr: 0.0010 - 448ms/epoch - 10ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0484 - val_loss: 0.0821 - lr: 1.0000e-04 - 366ms/epoch - 9ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0407 - val_loss: 0.0852 - lr: 1.0000e-04 - 337ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0458 - val_loss: 0.0893 - lr: 1.0000e-04 - 356ms/epoch - 8ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0395 - val_loss: 0.0956 - lr: 1.0000e-04 - 370ms/epoch - 9ms/step
Epoch 12/500

Epoch 00012: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00012: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0381 - val_loss: 0.0973 - lr: 1.0000e-04 - 380ms/epoch - 9ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0392 - val_loss: 0.0977 - lr: 1.0000e-05 - 373ms/epoch - 9ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0413 - val_loss: 0.0974 - lr: 1.0000e-05 - 431ms/epoch - 10ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0393 - val_loss: 0.0967 - lr: 1.0000e-05 - 481ms/epoch - 11ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0398 - val_loss: 0.0960 - lr: 1.0000e-05 - 357ms/epoch - 8ms/step
Epoch 17/500

Epoch 00017: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00017: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0374 - val_loss: 0.0955 - lr: 1.0000e-05 - 353ms/epoch - 8ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0380 - val_loss: 0.0952 - lr: 1.0000e-05 - 398ms/epoch - 9ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0404 - val_loss: 0.0959 - lr: 1.0000e-05 - 362ms/epoch - 8ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0381 - val_loss: 0.0959 - lr: 1.0000e-05 - 395ms/epoch - 9ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0386 - val_loss: 0.0970 - lr: 1.0000e-05 - 383ms/epoch - 9ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0377 - val_loss: 0.0978 - lr: 1.0000e-05 - 352ms/epoch - 8ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0381 - val_loss: 0.0982 - lr: 1.0000e-05 - 415ms/epoch - 10ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0333 - val_loss: 0.0972 - lr: 1.0000e-05 - 358ms/epoch - 8ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0352 - val_loss: 0.0975 - lr: 1.0000e-05 - 378ms/epoch - 9ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0380 - val_loss: 0.0970 - lr: 1.0000e-05 - 452ms/epoch - 11ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0346 - val_loss: 0.0966 - lr: 1.0000e-05 - 353ms/epoch - 8ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0381 - val_loss: 0.0965 - lr: 1.0000e-05 - 405ms/epoch - 9ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0395 - val_loss: 0.0966 - lr: 1.0000e-05 - 349ms/epoch - 8ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0382 - val_loss: 0.0969 - lr: 1.0000e-05 - 383ms/epoch - 9ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0382 - val_loss: 0.0979 - lr: 1.0000e-05 - 369ms/epoch - 9ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0385 - val_loss: 0.0989 - lr: 1.0000e-05 - 362ms/epoch - 8ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0352 - val_loss: 0.0987 - lr: 1.0000e-05 - 358ms/epoch - 8ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0395 - val_loss: 0.0993 - lr: 1.0000e-05 - 399ms/epoch - 9ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0366 - val_loss: 0.0987 - lr: 1.0000e-05 - 394ms/epoch - 9ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0384 - val_loss: 0.0977 - lr: 1.0000e-05 - 420ms/epoch - 10ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0329 - val_loss: 0.0974 - lr: 1.0000e-05 - 349ms/epoch - 8ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0354 - val_loss: 0.0981 - lr: 1.0000e-05 - 353ms/epoch - 8ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0354 - val_loss: 0.0972 - lr: 1.0000e-05 - 366ms/epoch - 9ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0340 - val_loss: 0.0984 - lr: 1.0000e-05 - 396ms/epoch - 9ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0352 - val_loss: 0.0997 - lr: 1.0000e-05 - 384ms/epoch - 9ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0378 - val_loss: 0.1001 - lr: 1.0000e-05 - 371ms/epoch - 9ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0376 - val_loss: 0.1000 - lr: 1.0000e-05 - 355ms/epoch - 8ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0330 - val_loss: 0.0996 - lr: 1.0000e-05 - 359ms/epoch - 8ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0361 - val_loss: 0.0995 - lr: 1.0000e-05 - 351ms/epoch - 8ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0348 - val_loss: 0.0988 - lr: 1.0000e-05 - 380ms/epoch - 9ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0348 - val_loss: 0.0984 - lr: 1.0000e-05 - 428ms/epoch - 10ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0373 - val_loss: 0.0986 - lr: 1.0000e-05 - 468ms/epoch - 11ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0378 - val_loss: 0.0989 - lr: 1.0000e-05 - 392ms/epoch - 9ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0378 - val_loss: 0.1000 - lr: 1.0000e-05 - 337ms/epoch - 8ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0360 - val_loss: 0.1007 - lr: 1.0000e-05 - 347ms/epoch - 8ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.02016
43/43 - 0s - loss: 0.0372 - val_loss: 0.1004 - lr: 1.0000e-05 - 405ms/epoch - 9ms/step
Epoch 00052: early stopping
SMA
Prediction vs Close:		46.27% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 22.905457367129987 
RMSE:	 4.785964622427749 
MAPE:	 4.003866138329542

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	44.03% Accuracy
MSE:	 24.329194491765858 
RMSE:	 4.932463328983385 
MAPE:	 4.099346553838579

WMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 26.731339639305343 
RMSE:	 5.170235936522176 
MAPE:	 4.142288801040536

DEMA
Prediction vs Close:		48.13% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 20.555600166870196 
RMSE:	 4.53382842274277 
MAPE:	 3.6522177332314283

KAMA
Prediction vs Close:		57.84% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 36.658985139129086 
RMSE:	 6.054666393710646 
MAPE:	 4.91375972579294

MIDPOINT
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	44.4% Accuracy
MSE:	 46.211728483578064 
RMSE:	 6.797920894183608 
MAPE:	 5.510818514624332

T3
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 39.422414094648474 
RMSE:	 6.278727107833918 
MAPE:	 5.177910469752962
TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9
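The TEMA docstring above, like T3's, gives only the signature. TEMA is defined as 3·EMA − 3·EMA(EMA) + EMA(EMA(EMA)), which cancels most of the single EMA's lag; a plain-Python sketch (TA-Lib trims the warm-up samples, this sketch does not, so early values will differ):

```python
def ema(xs, period):
    """Standard exponential moving average, seeded with the first value."""
    k = 2 / (period + 1)
    out = [xs[0]]
    for x in xs[1:]:
        out.append(k * x + (1 - k) * out[-1])
    return out

def tema(xs, period=30):
    """Triple Exponential Moving Average: 3*e1 - 3*e2 + e3."""
    e1 = ema(xs, period)
    e2 = ema(e1, period)
    e3 = ema(e2, period)
    return [3 * a - 3 * b + c for a, b, c in zip(e1, e2, e3)]
```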

Working on TEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16762.799, Time=5.15 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14158.507, Time=2.90 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16445.598, Time=9.16 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-16144.282, Time=11.57 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15275.101, Time=8.94 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15897.090, Time=14.43 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16446.973, Time=9.17 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-16567.628, Time=3.71 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-16523.926, Time=4.41 sec
 ARIMA(1,3,1)(0,0,0)[0] intercept   : AIC=-16696.008, Time=3.28 sec

Best model:  ARIMA(1,3,1)(0,0,0)[0]          
Total fit time: 72.746 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(1, 3, 1)   Log Likelihood                8408.400
Date:                Sun, 12 Dec 2021   AIC                         -16762.799
Time:                        15:53:36   BIC                         -16636.147
Sample:                             0   HQIC                        -16714.159
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -5.289e-07      0.001     -0.000      1.000      -0.002       0.002
x2         -5.288e-07      0.001     -0.001      0.999      -0.002       0.002
x3         -5.306e-07      0.001     -0.000      1.000      -0.002       0.002
x4             1.0000      0.000   2045.695      0.000       0.999       1.001
x5         -5.041e-07      0.000     -0.001      0.999      -0.001       0.001
x6         -9.879e-07   4.33e-05     -0.023      0.982   -8.58e-05    8.38e-05
x7         -5.185e-07      0.001     -0.001      0.999      -0.001       0.001
x8             0.0001      0.000      0.643      0.520      -0.000       0.001
x9          9.794e-08      0.001      0.000      1.000      -0.001       0.001
x10            0.0001      0.000      0.313      0.754      -0.001       0.001
x11           -0.0004      0.000     -2.284      0.022      -0.001   -6.06e-05
x12            0.0005      0.000      2.453      0.014       0.000       0.001
x13        -5.277e-07      0.000     -0.002      0.999      -0.001       0.001
x14        -1.566e-06      0.000     -0.005      0.996      -0.001       0.001
x15        -5.136e-07   9.86e-05     -0.005      0.996      -0.000       0.000
x16         -7.66e-07      0.000     -0.002      0.999      -0.001       0.001
x17        -5.146e-07      0.000     -0.003      0.998      -0.000       0.000
x18        -1.701e-07      0.001     -0.000      1.000      -0.001       0.001
x19         -5.77e-07   8.54e-05     -0.007      0.995      -0.000       0.000
x20         5.026e-07      0.001      0.001      0.999      -0.001       0.001
x21        -2.058e-06      0.000     -0.010      0.992      -0.000       0.000
x22        -1.098e-06      0.001     -0.001      0.999      -0.003       0.003
x23        -1.472e-06      0.001     -0.003      0.998      -0.001       0.001
x24        -8.255e-07      0.001     -0.001      0.999      -0.002       0.002
ar.L1         -0.2866   3.63e-05  -7897.273      0.000      -0.287      -0.287
ma.L1         -0.9124   1.46e-06  -6.25e+05      0.000      -0.912      -0.912
sigma2       9.98e-11   7.23e-11      1.380      0.168    -4.2e-11    2.42e-10
===================================================================================
Ljung-Box (L1) (Q):                  83.51   Jarque-Bera (JB):           4742889.91
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            -5.71
Prob(H) (two-sided):                  0.00   Kurtosis:                       378.86
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 4.2e+22. Standard errors may be unstable.
ARIMA order: (1, 3, 1) 
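For each indicator the log shows an ARIMA fit followed by LSTM training. This output does not show how the notebook wires the two together, but the standard hybrid scheme trains the LSTM on ARIMA's residuals and adds its residual forecast back onto ARIMA's linear forecast; a schematic sketch with hypothetical names:

```python
def residuals(series, arima_fitted):
    """What the linear ARIMA stage leaves behind -- the LSTM's training target."""
    return [y - f for y, f in zip(series, arima_fitted)]

def hybrid_forecast(arima_forecast, lstm_residual_forecast):
    """Hybrid prediction = linear ARIMA forecast + nonlinear residual correction."""
    return [a + r for a, r in zip(arima_forecast, lstm_residual_forecast)]
```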

WARNING:tensorflow:Layer lstm_56 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.08859, saving model to LSTM5.h5
90/90 - 2s - loss: 0.1608 - val_loss: 0.0886 - lr: 0.0010 - 2s/epoch - 27ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.08859
90/90 - 1s - loss: 0.0697 - val_loss: 0.1741 - lr: 0.0010 - 742ms/epoch - 8ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.08859 to 0.03433, saving model to LSTM5.h5
90/90 - 1s - loss: 0.0543 - val_loss: 0.0343 - lr: 0.0010 - 787ms/epoch - 9ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.03433 to 0.01881, saving model to LSTM5.h5
90/90 - 1s - loss: 0.0596 - val_loss: 0.0188 - lr: 0.0010 - 739ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.01881 to 0.01390, saving model to LSTM5.h5
90/90 - 1s - loss: 0.0722 - val_loss: 0.0139 - lr: 0.0010 - 744ms/epoch - 8ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0503 - val_loss: 0.4205 - lr: 0.0010 - 706ms/epoch - 8ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0424 - val_loss: 0.0959 - lr: 0.0010 - 699ms/epoch - 8ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0489 - val_loss: 0.5773 - lr: 0.0010 - 695ms/epoch - 8ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0383 - val_loss: 0.3410 - lr: 0.0010 - 745ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00010: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0328 - val_loss: 0.1177 - lr: 0.0010 - 783ms/epoch - 9ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0284 - val_loss: 0.1371 - lr: 1.0000e-04 - 715ms/epoch - 8ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0272 - val_loss: 0.1484 - lr: 1.0000e-04 - 680ms/epoch - 8ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0274 - val_loss: 0.1456 - lr: 1.0000e-04 - 704ms/epoch - 8ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0284 - val_loss: 0.1484 - lr: 1.0000e-04 - 697ms/epoch - 8ms/step
Epoch 15/500

Epoch 00015: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00015: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0277 - val_loss: 0.1554 - lr: 1.0000e-04 - 708ms/epoch - 8ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0245 - val_loss: 0.1553 - lr: 1.0000e-05 - 695ms/epoch - 8ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0270 - val_loss: 0.1547 - lr: 1.0000e-05 - 714ms/epoch - 8ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0238 - val_loss: 0.1542 - lr: 1.0000e-05 - 695ms/epoch - 8ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0251 - val_loss: 0.1534 - lr: 1.0000e-05 - 888ms/epoch - 10ms/step
Epoch 20/500

Epoch 00020: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00020: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0260 - val_loss: 0.1523 - lr: 1.0000e-05 - 739ms/epoch - 8ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0266 - val_loss: 0.1516 - lr: 1.0000e-05 - 726ms/epoch - 8ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0263 - val_loss: 0.1502 - lr: 1.0000e-05 - 713ms/epoch - 8ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0259 - val_loss: 0.1486 - lr: 1.0000e-05 - 676ms/epoch - 8ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0236 - val_loss: 0.1482 - lr: 1.0000e-05 - 714ms/epoch - 8ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0226 - val_loss: 0.1466 - lr: 1.0000e-05 - 725ms/epoch - 8ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0235 - val_loss: 0.1464 - lr: 1.0000e-05 - 789ms/epoch - 9ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0241 - val_loss: 0.1462 - lr: 1.0000e-05 - 704ms/epoch - 8ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0256 - val_loss: 0.1468 - lr: 1.0000e-05 - 743ms/epoch - 8ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0259 - val_loss: 0.1477 - lr: 1.0000e-05 - 745ms/epoch - 8ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0242 - val_loss: 0.1481 - lr: 1.0000e-05 - 678ms/epoch - 8ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0235 - val_loss: 0.1468 - lr: 1.0000e-05 - 728ms/epoch - 8ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0263 - val_loss: 0.1465 - lr: 1.0000e-05 - 822ms/epoch - 9ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0244 - val_loss: 0.1467 - lr: 1.0000e-05 - 770ms/epoch - 9ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0253 - val_loss: 0.1454 - lr: 1.0000e-05 - 719ms/epoch - 8ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0228 - val_loss: 0.1446 - lr: 1.0000e-05 - 750ms/epoch - 8ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0229 - val_loss: 0.1412 - lr: 1.0000e-05 - 727ms/epoch - 8ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0221 - val_loss: 0.1392 - lr: 1.0000e-05 - 681ms/epoch - 8ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0255 - val_loss: 0.1384 - lr: 1.0000e-05 - 700ms/epoch - 8ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0223 - val_loss: 0.1380 - lr: 1.0000e-05 - 699ms/epoch - 8ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0238 - val_loss: 0.1364 - lr: 1.0000e-05 - 714ms/epoch - 8ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0239 - val_loss: 0.1373 - lr: 1.0000e-05 - 691ms/epoch - 8ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0241 - val_loss: 0.1377 - lr: 1.0000e-05 - 696ms/epoch - 8ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0253 - val_loss: 0.1386 - lr: 1.0000e-05 - 730ms/epoch - 8ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0230 - val_loss: 0.1363 - lr: 1.0000e-05 - 702ms/epoch - 8ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0241 - val_loss: 0.1346 - lr: 1.0000e-05 - 702ms/epoch - 8ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0227 - val_loss: 0.1344 - lr: 1.0000e-05 - 718ms/epoch - 8ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0231 - val_loss: 0.1331 - lr: 1.0000e-05 - 797ms/epoch - 9ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0231 - val_loss: 0.1303 - lr: 1.0000e-05 - 763ms/epoch - 8ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0229 - val_loss: 0.1301 - lr: 1.0000e-05 - 718ms/epoch - 8ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0219 - val_loss: 0.1300 - lr: 1.0000e-05 - 791ms/epoch - 9ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0232 - val_loss: 0.1308 - lr: 1.0000e-05 - 730ms/epoch - 8ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0210 - val_loss: 0.1310 - lr: 1.0000e-05 - 710ms/epoch - 8ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0220 - val_loss: 0.1318 - lr: 1.0000e-05 - 692ms/epoch - 8ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0233 - val_loss: 0.1301 - lr: 1.0000e-05 - 706ms/epoch - 8ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.01390
90/90 - 1s - loss: 0.0227 - val_loss: 0.1280 - lr: 1.0000e-05 - 708ms/epoch - 8ms/step
Epoch 00055: early stopping
SMA
Prediction vs Close:		46.27% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 22.905457367129987 
RMSE:	 4.785964622427749 
MAPE:	 4.003866138329542

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	44.03% Accuracy
MSE:	 24.329194491765858 
RMSE:	 4.932463328983385 
MAPE:	 4.099346553838579

WMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 26.731339639305343 
RMSE:	 5.170235936522176 
MAPE:	 4.142288801040536

DEMA
Prediction vs Close:		48.13% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 20.555600166870196 
RMSE:	 4.53382842274277 
MAPE:	 3.6522177332314283

KAMA
Prediction vs Close:		57.84% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 36.658985139129086 
RMSE:	 6.054666393710646 
MAPE:	 4.91375972579294

MIDPOINT
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	44.4% Accuracy
MSE:	 46.211728483578064 
RMSE:	 6.797920894183608 
MAPE:	 5.510818514624332

T3
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 39.422414094648474 
RMSE:	 6.278727107833918 
MAPE:	 5.177910469752962

TEMA
Prediction vs Close:		47.76% Accuracy
Prediction vs Prediction:	51.87% Accuracy
MSE:	 27.219221961342864 
RMSE:	 5.217204420122223 
MAPE:	 4.028826161355838
Runtime: mins: 54.95112048581668
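The two accuracy figures reported above are directional hit rates: "Prediction vs Close" checks whether each prediction, relative to the previous actual close, points the same way the price actually moved, while "Prediction vs Prediction" compares consecutive predictions against consecutive closes. A minimal sketch of that scoring (function and variable names are illustrative, not from the notebook):

```python
def directional_accuracy(pred, actual):
    """Fraction of steps where the forecast and the price move in the same direction.

    hits_close: prediction vs previous close, compared with the actual move
    hits_pred:  prediction-to-prediction move, compared with the actual move
    """
    hits_close, hits_pred = [], []
    for i in range(1, len(pred)):
        up = actual[i] > actual[i - 1]
        down = actual[i] < actual[i - 1]
        hits_close.append(1 if (pred[i] > actual[i - 1] and up) or (pred[i] < actual[i - 1] and down) else 0)
        hits_pred.append(1 if (pred[i] > pred[i - 1] and up) or (pred[i] < pred[i - 1] and down) else 0)
    n = len(pred) - 1
    return sum(hits_close) / n, sum(hits_pred) / n

acc1, acc2 = directional_accuracy([10, 12, 11, 13], [10, 11, 12, 13])  # both 2/3 here
```

Note that a hit rate near 50% on a direction call is close to a coin flip, which is why several of the MA variants above are hard to distinguish from chance.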

Architecture Used

In [117]:
from google.colab import files
import cv2
uploaded = files.upload()
In [118]:
img = cv2.imread('Experiment5.png')
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture '+imgfile,fontsize=18)
plt.imshow(img)
Out[118]:
<matplotlib.image.AxesImage at 0x7f75dc11ff50>

Model Plots

In [108]:
with open('simulation5_data.json') as json_file:
    simulation5 = json.load(json_file)
fileimg = 'Experiment5'
In [109]:
for i in range(len(list(simulation5.keys()))):
  SIM = list(simulation5.keys())[i]
  plot_train(simulation5,SIM)
  plot_test(simulation5,SIM)
----- Train RMSE for SMA ----- 8.067881033957713
----- Train_MSE_LSTM for SMA ----- 65.09070437809457
----- Train MAE LSTM for SMA ----- 7.037769857791742
----- Test RMSE for SMA----- 4.785964622427749
----- Test_MSE_LSTM for SMA----- 22.905457367129987
----- Test_MAE_LSTM for SMA----- 4.003866138329542
----- Train RMSE for EMA ----- 9.103310494431582
----- Train_MSE_LSTM for EMA ----- 82.8702619580282
----- Train MAE LSTM for EMA ----- 7.942187929793381
----- Test RMSE for EMA----- 4.932463328983385
----- Test_MSE_LSTM for EMA----- 24.329194491765858
----- Test_MAE_LSTM for EMA----- 4.099346553838579
----- Train RMSE for WMA ----- 9.385901722462174
----- Train_MSE_LSTM for WMA ----- 88.09515114371841
----- Train MAE LSTM for WMA ----- 8.302396097763367
----- Test RMSE for WMA----- 5.170235936522176
----- Test_MSE_LSTM for WMA----- 26.731339639305343
----- Test_MAE_LSTM for WMA----- 4.142288801040536
----- Train RMSE for DEMA ----- 11.174629479833136
----- Train_MSE_LSTM for DEMA ----- 124.8723440115558
----- Train MAE LSTM for DEMA ----- 9.963720709179979
----- Test RMSE for DEMA----- 4.53382842274277
----- Test_MSE_LSTM for DEMA----- 20.555600166870196
----- Test_MAE_LSTM for DEMA----- 3.6522177332314283
----- Train RMSE for KAMA ----- 9.612463288091494
----- Train_MSE_LSTM for KAMA ----- 92.39945046490672
----- Train MAE LSTM for KAMA ----- 8.617087029614533
----- Test RMSE for KAMA----- 6.054666393710646
----- Test_MSE_LSTM for KAMA----- 36.658985139129086
----- Test_MAE_LSTM for KAMA----- 4.91375972579294
----- Train RMSE for MIDPOINT ----- 8.63124538592405
----- Train_MSE_LSTM for MIDPOINT ----- 74.49839691203519
----- Train MAE LSTM for MIDPOINT ----- 7.641012681592809
----- Test RMSE for MIDPOINT----- 6.797920894183608
----- Test_MSE_LSTM for MIDPOINT----- 46.211728483578064
----- Test_MAE_LSTM for MIDPOINT----- 5.510818514624332
----- Train RMSE for T3 ----- 10.712175422308457
----- Train_MSE_LSTM for T3 ----- 114.75070227830938
----- Train MAE LSTM for T3 ----- 9.554836890139239
----- Test RMSE for T3----- 6.278727107833918
----- Test_MSE_LSTM for T3----- 39.422414094648474
----- Test_MAE_LSTM for T3----- 5.177910469752962
----- Train RMSE for TEMA ----- 6.84563619096288
----- Train_MSE_LSTM for TEMA ----- 46.86273485902077
----- Train MAE LSTM for TEMA ----- 4.587888858994552
----- Test RMSE for TEMA----- 5.217204420122223
----- Test_MSE_LSTM for TEMA----- 27.219221961342864
----- Test_MAE_LSTM for TEMA----- 4.028826161355838

ARIMA with Exogenous Variable Multistep Multivariate LSTM Hybrid Model - Experiment 6
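Experiment 6 decomposes the close series into a smooth low-volatility component (a moving average, forecast with ARIMA plus exogenous inputs) and a high-volatility residual (forecast with a multivariate LSTM); the final forecast is the sum of the two. A toy numpy sketch of the decompose-and-recombine idea, with the forecasters stubbed out (all names here are illustrative):

```python
import numpy as np

def sma(x, window):
    """Trailing simple moving average; stands in for the low-volatility component."""
    x = np.asarray(x, dtype=float)
    out = np.convolve(x, np.ones(window) / window, mode='valid')
    # pad the warm-up period with NaN so the output aligns with the input
    return np.concatenate([np.full(window - 1, np.nan), out])

close = np.array([10.0, 11.0, 12.0, 11.0, 13.0, 14.0])
low_vol = sma(close, 3)        # smooth component (modeled with ARIMA below)
high_vol = close - low_vol     # residual component (modeled with the LSTM below)
# adding the two component forecasts back together yields the final prediction;
# with perfect component forecasts the recombination reproduces the close exactly
recombined = low_vol + high_vol
```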

In [120]:
def get_arima_exog(dataframe, original_data, train_len, test_len):
    # prepare train and test data for the exogenous variables
    X_value = pd.DataFrame(dataframe.iloc[:, :])
    y_value = pd.DataFrame(dataframe.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Get data and check shape
    # X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)#X will be of shape 224 X 3 X 21 (each 3 X 21 array will be 3 days' worth of data). yc will have the corresponding closing price value
    # pdb.set_trace()
    X_train, X_test, = split_train_test(X_scale_dataset)
    y_train, y_test, = split_train_test(y_scale_dataset)
    yc_train,yc_test = split_train_test(low_vol_data)
    yc = yc_test.values.tolist()
    y_train_list = y_train.flatten().tolist()
    y_test_list = y_test.flatten().tolist()
    # yc_train, yc_test, = split_train_test(original_data)
    index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)

    # Initialize model
    model = auto_arima(y_train_list,exogenous  = X_train,trace=True, error_action='ignore', start_p=1,start_q=1,max_p=3,max_q=3,d=3,
            suppress_warnings=True,stepwise=True,seasonal=True)

    # Determine model parameters
    print(model.summary())
    model.fit(y_train_list, maxiter=200)
    order = model.get_params()['order']
    print('ARIMA order:', order, '\n')

    # Generate predictions
    prediction = []
    for i in range(len(y_test_list)):
        model = pmdarima.ARIMA(order=order)
        model.fit(y_train_list)
        # print('working on', i+1, 'of', len(y_test), '-- ' + str(int(100 * (i + 1) / len(y_test))) + '% complete')

        prediction.append(model.predict()[0])
        y_train_list.append(y_test_list[i])

    predictionte = y_scaler.inverse_transform(np.array(prediction).reshape(-1,1))
    y_test_ = y_scaler.inverse_transform(np.array(y_test_list).reshape(-1,1))

    # Generate error data
    mse = mean_squared_error(yc_test, predictionte)
    rmse = mse ** 0.5
    mae = mean_absolute_error(y_test_ , predictionte )
    return yc,predictionte.flatten().tolist(), mse, rmse, mae
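The prediction loop above is a walk-forward (expanding-window) evaluation: at each test step the model is refit on all history seen so far, one step ahead is forecast, and the true value is then appended to the history. A numpy-only sketch of the same scheme, with a least-squares AR(1) fit standing in for `auto_arima` (the AR(1) model is an illustrative simplification, not the notebook's model):

```python
import numpy as np

def ar1_fit_predict(history):
    """Fit y[t] = a + b*y[t-1] by least squares and forecast one step ahead."""
    y = np.asarray(history, dtype=float)
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    a, b = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    return a + b * y[-1]

def walk_forward(train, test):
    history = list(train)
    preds = []
    for actual in test:
        preds.append(ar1_fit_predict(history))  # forecast the next step
        history.append(actual)                  # then reveal the true value
    return preds

series = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
preds = walk_forward(series[:5], series[5:])  # -> [6.0, 7.0, 8.0] on this exact linear series
```

Refitting at every step is what makes this loop the dominant cost of the ARIMA half (see the fit times in the stepwise search output below).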
In [122]:
def get_lstm(data,original_data, train_len, test_len,img_file,ma ,lstm_len=3):
    # prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Get data and check shape
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)  # X: (samples, n_steps_in, n_features); yc holds the corresponding closing prices
    # pdb.set_trace()
    X_train, X_test, = split_train_test(X)
    y_train, y_test, = split_train_test(y)
    # yc_train, yc_test, = split_train_test(original_data)
    index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20  # constant offset subtracted from the inverse-transformed test predictions
    input_dim = X_train.shape[1]     # n_steps_in
    feature_size = X_train.shape[2]  # number of features
    output_dim = y_train.shape[1]    # n_steps_out



    # Option 1
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    # model.add(Dense(units=64,activation='relu'))
    # model.add(Dropout(0.5))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')

    # ## Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')


    # Option 2
    model = Sequential()
    model.add(Bidirectional(LSTM(units=128), input_shape=(input_dim, feature_size)))
    model.add(Dense(64))
    model.add(Dense(units=output_dim))
    model.compile(optimizer=Adam(learning_rate=0.001), loss='mean_squared_error', metrics=['accuracy'])
    # Common code
    callbacks = [
    EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    ModelCheckpoint('LSTM6.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file+'.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # plot loss
    fname2 = img_file+'-'+ma
    plt.title(img_file+'-'+ma+' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2+'.png',dpi='figure')
    pyplot.show()

    # Option 3
    # define custom activation
    # 
    # class Double_Tanh(Activation):
    #     def __init__(self, activation, **kwargs):
    #         super(Double_Tanh, self).__init__(activation, **kwargs)
    #         self.__name__ = 'double_tanh'

    # def double_tanh(x):
    #     return (K.tanh(x) * 2)

    # get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
    #     # Model Generation
    # model = Sequential()
    # #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    # model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
    # model.add(Dense(1))
    # model.add(Activation(double_tanh))
    # model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 4
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(x_train.shape[1], 1)))
    # model.add(LSTM(units=int(lstm_len/2)))
    # model.add(Dense(1, activation='sigmoid'))
    # model.compile(loss='mean_squared_error', optimizer='adam')
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()



    # Generate predictions
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
    outputtr = []
    for i in range(len(predictiontr)):
        outputtr.extend(predictiontr[i])
    predictiontr = outputtr
    # Generate error data

    ## replace with yc , xtest generated by new multistep method
    mse_tr = mean_squared_error(y_train, predictiontr)
    rmse_tr = mse_tr ** 0.5
    # mape = mean_absolute_percentage_error(X_test, pd.Series(predictiontr))
    mae_tr = mean_absolute_error(y_train, pd.Series(predictiontr))
    # Original_tr = pd.Series(yc_train)
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()


    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte)-det).tolist()
    outputte = []
    for i in range(len(predictionte)):
        outputte.extend(predictionte[i])
    predictionte = outputte
    # Generate error data

    mse_te = mean_squared_error(y_test, predictionte)
    rmse_te = mse_te ** 0.5
    # mape = mean_absolute_percentage_error(X_test, pd.Series(predictionte))
    mae_te = mean_absolute_error(y_test, pd.Series(predictionte))
    # Original_te = pd.Series(yc_test)
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()

    return Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,Original_te,predictionte, mse_te,rmse_te,mae_te
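Both halves of the hybrid scale the data into (-1, 1) with `MinMaxScaler` and invert the transform before computing errors in price units. A numpy-only sketch of that round trip, mirroring the scaler's formula (function names here are illustrative):

```python
import numpy as np

def minmax_scale(x, lo=-1.0, hi=1.0):
    """Scale a 1-D array into [lo, hi]; also return the params needed to invert."""
    x = np.asarray(x, dtype=float)
    xmin, xmax = x.min(), x.max()
    scaled = (x - xmin) / (xmax - xmin) * (hi - lo) + lo
    return scaled, (xmin, xmax, lo, hi)

def minmax_inverse(scaled, params):
    """Undo minmax_scale, mapping scaled values back to the original range."""
    xmin, xmax, lo, hi = params
    return (np.asarray(scaled) - lo) / (hi - lo) * (xmax - xmin) + xmin

prices = [100.0, 110.0, 120.0, 130.0]
scaled, params = minmax_scale(prices)      # maps 100 -> -1 and 130 -> 1
restored = minmax_inverse(scaled, params)  # recovers the original prices
```

Inverting before the error computation matters: MSE and RMSE on the (-1, 1) scale are not comparable across MA variants whose price ranges differ.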
In [123]:
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation6 = {}
    imgfile = 'Experiment6'
    for ma in optimized_period:
                print(ma)
                print(functions[ma])
                print ( int( optimized_period[ma]))
              # if ma == 'SMA':
                low_vol = df.apply(lambda c:  functions[ma](c, timeperiod = int( optimized_period[ma])))
                low_vol = low_vol.fillna(0)
                low_vol_data = df['close']
                high_vol = pd.DataFrame()
                df2 = df.copy()
                for i in df2.columns:
                  if i in low_vol.columns:
                    high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
                high_vol_data = df['close']
                ## *****************************************************
                # Generate ARIMA and LSTM predictions
                print('\nWorking on ' + ma + ' predictions')
                try:
                  print('parameters used : ', train_len, test_len)
                  low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse,low_vol_mae = get_arima_exog(low_vol,low_vol_data, train_len, test_len)
                except:
                    print('ARIMA error, skipping to next MA type')
                    continue
                Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse,high_vol_mae, = get_lstm(high_vol,high_vol_data, train_len, test_len,imgfile,ma)
                final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps 
                mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
                rmse_ftr = mse_ftr ** 0.5
                mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
                mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)

                final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
                mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
                rmse = mse ** 0.5
                mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
                mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
                # Generate prediction accuracy
                actual = df['close'].tail(test_len).values
                result_1 = []
                result_2 = []
                for i in range(1, len(final_prediction)):
                    # Compare prediction to previous close price
                    if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                        result_1.append(1)
                    elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                        result_1.append(1)
                    else:
                        result_1.append(0)

                    # Compare prediction to previous prediction
                    if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                        result_2.append(1)
                    elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                        result_2.append(1)
                    else:
                        result_2.append(0)

                accuracy_1 = np.mean(result_1)
                accuracy_2 = np.mean(result_2)

                simulation6[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
                                              'rmse': low_vol_rmse, 'mae' : low_vol_mae},
                                  'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
                                              'rmse': high_vol_rmse, 'mae' : high_vol_mae},
                                  'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
                                              'rmse': rmse_ftr, 'mae' : mae_ftr},
                                  'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
                                            'rmse': rmse, 'mae': mae },
                                  'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}

                # save simulation data here as checkpoint
                with open('simulation6_data.json', 'w') as fp:
                    json.dump(simulation6, fp)

                for key in simulation6.keys():
                    print('\n' + key)
                    print('Prediction vs Close:\t\t' + str(round(100*simulation6[key]['accuracy']['prediction vs close'], 2))
                          + '% Accuracy')
                    print('Prediction vs Prediction:\t' + str(round(100*simulation6[key]['accuracy']['prediction vs prediction'], 2))
                          + '% Accuracy')
                    print('MSE:\t', simulation6[key]['final']['mse'],
                          '\nRMSE:\t', simulation6[key]['final']['rmse'],
                          '\nMAPE:\t', simulation6[key]['final']['mae'])
              # else:
              #   break
    elapsed = timeit.default_timer() - start_time
    print('Runtime: mins:',elapsed/60)
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17

Working on SMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16999.786, Time=3.31 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14568.592, Time=4.55 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15578.394, Time=8.68 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14566.592, Time=7.49 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16966.361, Time=9.62 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-16121.635, Time=10.86 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-17214.069, Time=13.36 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-14562.592, Time=9.23 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-14572.319, Time=10.05 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-14403.474, Time=42.41 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 119.573 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8634.035
Date:                Sun, 12 Dec 2021   AIC                         -17214.069
Time:                        16:07:00   BIC                         -17087.416
Sample:                             0   HQIC                        -17165.429
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -4.257e-09   9.55e-06     -0.000      1.000   -1.87e-05    1.87e-05
x2         -4.256e-09   9.56e-06     -0.000      1.000   -1.87e-05    1.87e-05
x3         -4.313e-09   9.62e-06     -0.000      1.000   -1.89e-05    1.88e-05
x4             1.0000   9.61e-06   1.04e+05      0.000       1.000       1.000
x5         -3.891e-09   9.14e-06     -0.000      1.000   -1.79e-05    1.79e-05
x6         -1.122e-08   1.03e-05     -0.001      0.999   -2.03e-05    2.03e-05
x7         -4.223e-09   9.54e-06     -0.000      1.000   -1.87e-05    1.87e-05
x8         -4.234e-09   9.55e-06     -0.000      1.000   -1.87e-05    1.87e-05
x9         -1.626e-10   6.54e-07     -0.000      1.000   -1.28e-06    1.28e-06
x10        -6.831e-10   2.91e-06     -0.000      1.000    -5.7e-06     5.7e-06
x11        -4.115e-09   9.41e-06     -0.000      1.000   -1.84e-05    1.84e-05
x12        -4.303e-09   9.62e-06     -0.000      1.000   -1.89e-05    1.88e-05
x13        -4.288e-09    9.6e-06     -0.000      1.000   -1.88e-05    1.88e-05
x14        -3.749e-08   2.81e-05     -0.001      0.999   -5.51e-05     5.5e-05
x15        -5.032e-09   1.04e-05     -0.000      1.000   -2.04e-05    2.03e-05
x16        -3.685e-09      9e-06     -0.000      1.000   -1.76e-05    1.76e-05
x17        -3.286e-09   8.45e-06     -0.000      1.000   -1.66e-05    1.66e-05
x18         -1.22e-08   1.59e-05     -0.001      0.999   -3.11e-05    3.11e-05
x19        -5.685e-09    1.1e-05     -0.001      1.000   -2.16e-05    2.16e-05
x20         -1.42e-08   1.69e-05     -0.001      0.999   -3.32e-05    3.32e-05
x21        -5.194e-08   3.31e-05     -0.002      0.999   -6.49e-05    6.48e-05
x22        -2.548e-08   2.31e-05     -0.001      0.999   -4.53e-05    4.52e-05
x23        -3.534e-08   2.73e-05     -0.001      0.999   -5.35e-05    5.34e-05
x24        -1.566e-08    1.8e-05     -0.001      0.999   -3.53e-05    3.53e-05
ma.L1         -1.3899   4.98e-09  -2.79e+08      0.000      -1.390      -1.390
ma.L2          0.4032   4.98e-09   8.09e+07      0.000       0.403       0.403
sigma2      7.635e-11   6.92e-11      1.103      0.270   -5.93e-11    2.12e-10
===================================================================================
Ljung-Box (L1) (Q):                  68.48   Jarque-Bera (JB):           5579791.06
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            10.12
Prob(H) (two-sided):                  0.00   Kurtosis:                       410.36
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.69e+24. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 

/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.00838, saving model to LSTM6.h5
48/48 - 4s - loss: 0.1905 - accuracy: 0.0000e+00 - val_loss: 0.0084 - val_accuracy: 0.0037 - lr: 0.0010 - 4s/epoch - 81ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.00838
48/48 - 0s - loss: 0.0400 - accuracy: 0.0000e+00 - val_loss: 0.0181 - val_accuracy: 0.0037 - lr: 0.0010 - 245ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.00838
48/48 - 0s - loss: 0.0195 - accuracy: 0.0000e+00 - val_loss: 0.0654 - val_accuracy: 0.0037 - lr: 0.0010 - 260ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.00838 to 0.00497, saving model to LSTM6.h5
48/48 - 0s - loss: 0.0279 - accuracy: 0.0000e+00 - val_loss: 0.0050 - val_accuracy: 0.0037 - lr: 0.0010 - 314ms/epoch - 7ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.00497 to 0.00441, saving model to LSTM6.h5
48/48 - 0s - loss: 0.0026 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 0.0010 - 309ms/epoch - 6ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0025 - accuracy: 0.0000e+00 - val_loss: 0.0080 - val_accuracy: 0.0037 - lr: 0.0010 - 255ms/epoch - 5ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0018 - accuracy: 0.0000e+00 - val_loss: 0.0108 - val_accuracy: 0.0037 - lr: 0.0010 - 262ms/epoch - 5ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0018 - accuracy: 0.0000e+00 - val_loss: 0.0140 - val_accuracy: 0.0037 - lr: 0.0010 - 267ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0020 - accuracy: 0.0000e+00 - val_loss: 0.0166 - val_accuracy: 0.0037 - lr: 0.0010 - 271ms/epoch - 6ms/step
Epoch 10/500

Epoch 00010: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00010: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0022 - accuracy: 0.0000e+00 - val_loss: 0.0143 - val_accuracy: 0.0037 - lr: 0.0010 - 282ms/epoch - 6ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0046 - accuracy: 0.0000e+00 - val_loss: 0.0198 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 271ms/epoch - 6ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0051 - accuracy: 0.0000e+00 - val_loss: 0.0117 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 298ms/epoch - 6ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0028 - accuracy: 0.0000e+00 - val_loss: 0.0099 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 279ms/epoch - 6ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0021 - accuracy: 0.0000e+00 - val_loss: 0.0090 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 256ms/epoch - 5ms/step
Epoch 15/500

Epoch 00015: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00015: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0018 - accuracy: 0.0000e+00 - val_loss: 0.0086 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 269ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0082 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 294ms/epoch - 6ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0080 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 251ms/epoch - 5ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0078 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 254ms/epoch - 5ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0077 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 258ms/epoch - 5ms/step
Epoch 20/500

Epoch 00020: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00020: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0076 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 265ms/epoch - 6ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0075 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 278ms/epoch - 6ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0075 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 255ms/epoch - 5ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0074 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 301ms/epoch - 6ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0074 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 250ms/epoch - 5ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0073 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 272ms/epoch - 6ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0073 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 270ms/epoch - 6ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0073 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 250ms/epoch - 5ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0072 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 288ms/epoch - 6ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0072 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 272ms/epoch - 6ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0071 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 258ms/epoch - 5ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0071 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 272ms/epoch - 6ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0070 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 253ms/epoch - 5ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0070 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 263ms/epoch - 5ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0070 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 253ms/epoch - 5ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0069 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 283ms/epoch - 6ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0069 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 253ms/epoch - 5ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0068 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 276ms/epoch - 6ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0068 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 251ms/epoch - 5ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0067 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 292ms/epoch - 6ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0067 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 252ms/epoch - 5ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 276ms/epoch - 6ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 294ms/epoch - 6ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 266ms/epoch - 6ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 262ms/epoch - 5ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 269ms/epoch - 6ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0063 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 279ms/epoch - 6ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0063 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 268ms/epoch - 6ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0062 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 249ms/epoch - 5ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00441
48/48 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0062 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 248ms/epoch - 5ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00441
48/48 - 0s - loss: 9.9537e-04 - accuracy: 0.0000e+00 - val_loss: 0.0061 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 261ms/epoch - 5ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00441
48/48 - 0s - loss: 9.8973e-04 - accuracy: 0.0000e+00 - val_loss: 0.0061 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 242ms/epoch - 5ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00441
48/48 - 0s - loss: 9.8420e-04 - accuracy: 0.0000e+00 - val_loss: 0.0060 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 254ms/epoch - 5ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00441
48/48 - 0s - loss: 9.7879e-04 - accuracy: 0.0000e+00 - val_loss: 0.0060 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 249ms/epoch - 5ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00441
48/48 - 0s - loss: 9.7348e-04 - accuracy: 0.0000e+00 - val_loss: 0.0059 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 297ms/epoch - 6ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00441
48/48 - 0s - loss: 9.6828e-04 - accuracy: 0.0000e+00 - val_loss: 0.0059 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 249ms/epoch - 5ms/step
Epoch 00055: early stopping
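The checkpointing, learning-rate reduction, and early stopping visible in the log above can be reproduced with a Keras callback list along these lines. The `patience`, `factor`, and `min_lr` values here are assumptions inferred from the log (the LR drops 10x per plateau and bottoms out at 1e-05), not the notebook's exact settings:

```python
from tensorflow.keras.callbacks import (EarlyStopping, ModelCheckpoint,
                                        ReduceLROnPlateau)

# Callback stack consistent with the log: best weights saved to LSTM6.h5,
# LR reduced by 10x on val_loss plateaus (floored at 1e-5), and training
# halted once val_loss stops improving for a long stretch.
callbacks = [
    ModelCheckpoint('LSTM6.h5', monitor='val_loss',
                    save_best_only=True, verbose=1),
    ReduceLROnPlateau(monitor='val_loss', factor=0.1,
                      patience=5, min_lr=1e-5, verbose=1),
    EarlyStopping(monitor='val_loss', patience=40, verbose=1),
]

# Usage (model and data as defined earlier in the notebook):
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=500, callbacks=callbacks, verbose=2)
```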
SMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 109.05489356902385 
RMSE:	 10.442935103170173 
MAPE:	 8.732625329283675
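The SMA metrics reported above (MSE, RMSE, MAPE, and the directional "Prediction vs Close" accuracy) can be computed with a few lines of NumPy. The helper names here are illustrative, not the notebook's own:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MSE, RMSE and MAPE (in percent), matching the metrics reported above."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100
    return mse, rmse, mape

def directional_accuracy(y_true, y_pred):
    """Percentage of steps where the predicted direction of the move
    matches the actual direction ("Prediction vs Close"-style accuracy)."""
    true_dir = np.sign(np.diff(np.asarray(y_true, dtype=float)))
    pred_dir = np.sign(np.diff(np.asarray(y_pred, dtype=float)))
    return np.mean(true_dir == pred_dir) * 100
```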
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51
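TA-Lib's EMA, whose help text is shown above, smooths with factor alpha = 2 / (timeperiod + 1). A close pandas equivalent is below; it differs from TA-Lib only in how the first few values are seeded, since TA-Lib initialises the average with an SMA of the first `timeperiod` prices:

```python
import pandas as pd

def ema(prices, timeperiod=30):
    """Exponential moving average, analogous to TA-Lib's EMA(price, timeperiod).
    span=timeperiod gives the same smoothing factor 2 / (timeperiod + 1)."""
    return pd.Series(prices).ewm(span=timeperiod, adjust=False).mean()
```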

Working on EMA predictions
parameters used: 808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16999.778, Time=3.00 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14568.589, Time=4.60 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-14606.447, Time=6.14 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14566.589, Time=6.87 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15343.613, Time=9.59 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15047.583, Time=13.16 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16858.964, Time=11.54 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17024.022, Time=6.08 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-16998.618, Time=3.44 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17081.451, Time=7.17 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=inf, Time=17.36 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-16997.990, Time=3.81 sec
 ARIMA(3,3,1)(0,0,0)[0] intercept   : AIC=-16992.667, Time=4.71 sec

Best model:  ARIMA(3,3,1)(0,0,0)[0]          
Total fit time: 97.502 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 1)   Log Likelihood                8569.726
Date:                Sun, 12 Dec 2021   AIC                         -17081.451
Time:                        16:12:48   BIC                         -16945.417
Sample:                             0   HQIC                        -17029.208
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -2.316e-10   9.89e-05  -2.34e-06      1.000      -0.000       0.000
x2         -2.309e-10   9.88e-05  -2.34e-06      1.000      -0.000       0.000
x3         -2.325e-10   9.91e-05  -2.35e-06      1.000      -0.000       0.000
x4             1.0000    9.9e-05   1.01e+04      0.000       1.000       1.000
x5         -2.108e-10   9.43e-05  -2.24e-06      1.000      -0.000       0.000
x6         -7.997e-10      0.000  -4.63e-06      1.000      -0.000       0.000
x7         -2.295e-10   9.85e-05  -2.33e-06      1.000      -0.000       0.000
x8         -2.244e-10   9.74e-05   -2.3e-06      1.000      -0.000       0.000
x9         -1.166e-11   1.98e-05   -5.9e-07      1.000   -3.87e-05    3.87e-05
x10        -4.454e-11   4.19e-05  -1.06e-06      1.000   -8.22e-05    8.22e-05
x11        -2.219e-10   9.68e-05  -2.29e-06      1.000      -0.000       0.000
x12        -2.264e-10    9.8e-05  -2.31e-06      1.000      -0.000       0.000
x13        -2.315e-10   9.89e-05  -2.34e-06      1.000      -0.000       0.000
x14        -1.767e-09      0.000  -6.47e-06      1.000      -0.001       0.001
x15        -2.096e-10   9.38e-05  -2.23e-06      1.000      -0.000       0.000
x16        -5.257e-10      0.000   -3.5e-06      1.000      -0.000       0.000
x17        -2.143e-10   9.53e-05  -2.25e-06      1.000      -0.000       0.000
x18        -3.776e-11   3.61e-05  -1.05e-06      1.000   -7.08e-05    7.08e-05
x19         -2.52e-10      0.000  -2.41e-06      1.000      -0.000       0.000
x20        -2.417e-10   9.51e-05  -2.54e-06      1.000      -0.000       0.000
x21         -3.16e-09      0.000  -8.64e-06      1.000      -0.001       0.001
x22        -2.955e-09      0.000  -8.32e-06      1.000      -0.001       0.001
x23        -1.664e-09      0.000  -6.29e-06      1.000      -0.001       0.001
x24        -1.568e-09      0.000  -6.07e-06      1.000      -0.001       0.001
ar.L1         -0.4923    1.2e-09  -4.09e+08      0.000      -0.492      -0.492
ar.L2         -0.1923      7e-10  -2.75e+08      0.000      -0.192      -0.192
ar.L3         -0.0461   3.32e-10  -1.39e+08      0.000      -0.046      -0.046
ma.L1         -0.7077   2.73e-09  -2.59e+08      0.000      -0.708      -0.708
sigma2       8.99e-11   6.96e-11      1.291      0.197   -4.66e-11    2.26e-10
===================================================================================
Ljung-Box (L1) (Q):                  55.51   Jarque-Bera (JB):           4268313.90
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.44
Prob(H) (two-sided):                  0.00   Kurtosis:                       359.56
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.36e+28. Standard errors may be unstable.
ARIMA order: (3, 3, 1) 

/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.01247, saving model to LSTM6.h5
16/16 - 4s - loss: 0.1231 - accuracy: 0.0000e+00 - val_loss: 0.0125 - val_accuracy: 0.0037 - lr: 0.0010 - 4s/epoch - 225ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.01247
16/16 - 0s - loss: 0.0716 - accuracy: 0.0000e+00 - val_loss: 0.0196 - val_accuracy: 0.0037 - lr: 0.0010 - 91ms/epoch - 6ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.01247
16/16 - 0s - loss: 0.0081 - accuracy: 0.0000e+00 - val_loss: 0.0408 - val_accuracy: 0.0037 - lr: 0.0010 - 138ms/epoch - 9ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.01247 to 0.00713, saving model to LSTM6.h5
16/16 - 0s - loss: 0.0251 - accuracy: 0.0000e+00 - val_loss: 0.0071 - val_accuracy: 0.0037 - lr: 0.0010 - 130ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.00713
16/16 - 0s - loss: 0.0132 - accuracy: 0.0000e+00 - val_loss: 0.0361 - val_accuracy: 0.0037 - lr: 0.0010 - 104ms/epoch - 6ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.00713
16/16 - 0s - loss: 0.0307 - accuracy: 0.0000e+00 - val_loss: 0.0107 - val_accuracy: 0.0037 - lr: 0.0010 - 106ms/epoch - 7ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00713
16/16 - 0s - loss: 0.0109 - accuracy: 0.0000e+00 - val_loss: 0.0363 - val_accuracy: 0.0037 - lr: 0.0010 - 102ms/epoch - 6ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00713
16/16 - 0s - loss: 0.0310 - accuracy: 0.0000e+00 - val_loss: 0.0097 - val_accuracy: 0.0037 - lr: 0.0010 - 96ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00009: val_loss did not improve from 0.00713
16/16 - 0s - loss: 0.0140 - accuracy: 0.0000e+00 - val_loss: 0.0284 - val_accuracy: 0.0037 - lr: 0.0010 - 116ms/epoch - 7ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00713
16/16 - 0s - loss: 0.0268 - accuracy: 0.0000e+00 - val_loss: 0.0118 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 114ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: val_loss improved from 0.00713 to 0.00653, saving model to LSTM6.h5
16/16 - 0s - loss: 0.0044 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 121ms/epoch - 8ms/step
Epoch 12/500

Epoch 00012: val_loss improved from 0.00653 to 0.00623, saving model to LSTM6.h5
16/16 - 0s - loss: 0.0025 - accuracy: 0.0000e+00 - val_loss: 0.0062 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 118ms/epoch - 7ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00623
16/16 - 0s - loss: 0.0019 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 115ms/epoch - 7ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00623
16/16 - 0s - loss: 0.0017 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 114ms/epoch - 7ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00623
16/16 - 0s - loss: 0.0016 - accuracy: 0.0000e+00 - val_loss: 0.0063 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 102ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: val_loss improved from 0.00623 to 0.00609, saving model to LSTM6.h5
16/16 - 0s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0061 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 121ms/epoch - 8ms/step
Epoch 17/500

Epoch 00017: val_loss improved from 0.00609 to 0.00596, saving model to LSTM6.h5
16/16 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0060 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 131ms/epoch - 8ms/step
Epoch 18/500

Epoch 00018: val_loss improved from 0.00596 to 0.00585, saving model to LSTM6.h5
16/16 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0058 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 128ms/epoch - 8ms/step
Epoch 19/500

Epoch 00019: val_loss improved from 0.00585 to 0.00574, saving model to LSTM6.h5
16/16 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0057 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 126ms/epoch - 8ms/step
Epoch 20/500

Epoch 00020: val_loss improved from 0.00574 to 0.00564, saving model to LSTM6.h5
16/16 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0056 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 130ms/epoch - 8ms/step
Epoch 21/500

Epoch 00021: val_loss improved from 0.00564 to 0.00555, saving model to LSTM6.h5
16/16 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0056 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 136ms/epoch - 8ms/step
Epoch 22/500

Epoch 00022: val_loss improved from 0.00555 to 0.00547, saving model to LSTM6.h5
16/16 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0055 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 134ms/epoch - 8ms/step
Epoch 23/500

Epoch 00023: val_loss improved from 0.00547 to 0.00540, saving model to LSTM6.h5
16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0054 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 142ms/epoch - 9ms/step
Epoch 24/500

Epoch 00024: val_loss improved from 0.00540 to 0.00532, saving model to LSTM6.h5
16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0053 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 157ms/epoch - 10ms/step
Epoch 25/500

Epoch 00025: val_loss improved from 0.00532 to 0.00526, saving model to LSTM6.h5
16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0053 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 138ms/epoch - 9ms/step
Epoch 26/500

Epoch 00026: val_loss improved from 0.00526 to 0.00520, saving model to LSTM6.h5
16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0052 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 135ms/epoch - 8ms/step
Epoch 27/500

Epoch 00027: val_loss improved from 0.00520 to 0.00514, saving model to LSTM6.h5
16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0051 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 142ms/epoch - 9ms/step
Epoch 28/500

Epoch 00028: val_loss improved from 0.00514 to 0.00508, saving model to LSTM6.h5
16/16 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0051 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 124ms/epoch - 8ms/step
Epoch 29/500

Epoch 00029: val_loss improved from 0.00508 to 0.00503, saving model to LSTM6.h5
16/16 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0050 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 126ms/epoch - 8ms/step
Epoch 30/500

Epoch 00030: val_loss improved from 0.00503 to 0.00498, saving model to LSTM6.h5
16/16 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0050 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 109ms/epoch - 7ms/step
Epoch 31/500

Epoch 00031: val_loss improved from 0.00498 to 0.00494, saving model to LSTM6.h5
16/16 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0049 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 125ms/epoch - 8ms/step
Epoch 32/500

Epoch 00032: val_loss improved from 0.00494 to 0.00489, saving model to LSTM6.h5
16/16 - 0s - loss: 9.9457e-04 - accuracy: 0.0000e+00 - val_loss: 0.0049 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 134ms/epoch - 8ms/step
Epoch 33/500

Epoch 00033: val_loss improved from 0.00489 to 0.00485, saving model to LSTM6.h5
16/16 - 0s - loss: 9.8588e-04 - accuracy: 0.0000e+00 - val_loss: 0.0049 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 126ms/epoch - 8ms/step
Epoch 34/500

Epoch 00034: val_loss improved from 0.00485 to 0.00482, saving model to LSTM6.h5
16/16 - 0s - loss: 9.7792e-04 - accuracy: 0.0000e+00 - val_loss: 0.0048 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 137ms/epoch - 9ms/step
Epoch 35/500

Epoch 00035: val_loss improved from 0.00482 to 0.00478, saving model to LSTM6.h5
16/16 - 0s - loss: 9.7058e-04 - accuracy: 0.0000e+00 - val_loss: 0.0048 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 140ms/epoch - 9ms/step
Epoch 36/500

Epoch 00036: val_loss improved from 0.00478 to 0.00475, saving model to LSTM6.h5
16/16 - 0s - loss: 9.6376e-04 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 118ms/epoch - 7ms/step
Epoch 37/500

Epoch 00037: val_loss improved from 0.00475 to 0.00472, saving model to LSTM6.h5
16/16 - 0s - loss: 9.5736e-04 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 120ms/epoch - 8ms/step
Epoch 38/500

Epoch 00038: val_loss improved from 0.00472 to 0.00469, saving model to LSTM6.h5
16/16 - 0s - loss: 9.5131e-04 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 120ms/epoch - 8ms/step
Epoch 39/500

Epoch 00039: val_loss improved from 0.00469 to 0.00466, saving model to LSTM6.h5
16/16 - 0s - loss: 9.4556e-04 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 135ms/epoch - 8ms/step
Epoch 40/500

Epoch 00040: val_loss improved from 0.00466 to 0.00464, saving model to LSTM6.h5
16/16 - 0s - loss: 9.4006e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 139ms/epoch - 9ms/step
Epoch 41/500

Epoch 00041: val_loss improved from 0.00464 to 0.00461, saving model to LSTM6.h5
16/16 - 0s - loss: 9.3477e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 123ms/epoch - 8ms/step
Epoch 42/500

Epoch 00042: val_loss improved from 0.00461 to 0.00460, saving model to LSTM6.h5
16/16 - 0s - loss: 9.2965e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 136ms/epoch - 9ms/step
Epoch 43/500

Epoch 00043: val_loss improved from 0.00460 to 0.00458, saving model to LSTM6.h5
16/16 - 0s - loss: 9.2468e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 118ms/epoch - 7ms/step
Epoch 44/500

Epoch 00044: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00044: val_loss improved from 0.00458 to 0.00456, saving model to LSTM6.h5
16/16 - 0s - loss: 9.1985e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 129ms/epoch - 8ms/step
Epoch 45/500

Epoch 00045: val_loss improved from 0.00456 to 0.00456, saving model to LSTM6.h5
16/16 - 0s - loss: 9.1033e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 119ms/epoch - 7ms/step
Epoch 46/500

Epoch 00046: val_loss improved from 0.00456 to 0.00456, saving model to LSTM6.h5
16/16 - 0s - loss: 9.0981e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 131ms/epoch - 8ms/step
Epoch 47/500

Epoch 00047: val_loss improved from 0.00456 to 0.00456, saving model to LSTM6.h5
16/16 - 0s - loss: 9.0933e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 140ms/epoch - 9ms/step
Epoch 48/500

Epoch 00048: val_loss improved from 0.00456 to 0.00456, saving model to LSTM6.h5
16/16 - 0s - loss: 9.0887e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 125ms/epoch - 8ms/step
Epoch 49/500

Epoch 00049: val_loss improved from 0.00456 to 0.00456, saving model to LSTM6.h5
16/16 - 0s - loss: 9.0841e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 128ms/epoch - 8ms/step
Epoch 50/500

Epoch 00050: val_loss improved from 0.00456 to 0.00456, saving model to LSTM6.h5
16/16 - 0s - loss: 9.0795e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 132ms/epoch - 8ms/step
Epoch 51/500

Epoch 00051: val_loss improved from 0.00456 to 0.00456, saving model to LSTM6.h5
16/16 - 0s - loss: 9.0749e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 123ms/epoch - 8ms/step
Epoch 52/500

Epoch 00052: val_loss improved from 0.00456 to 0.00455, saving model to LSTM6.h5
16/16 - 0s - loss: 9.0702e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 124ms/epoch - 8ms/step
Epoch 53/500

Epoch 00053: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00053: val_loss improved from 0.00455 to 0.00455, saving model to LSTM6.h5
16/16 - 0s - loss: 9.0654e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 118ms/epoch - 7ms/step
Epoch 54/500

Epoch 00054: val_loss improved from 0.00455 to 0.00455, saving model to LSTM6.h5
16/16 - 0s - loss: 9.0606e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 137ms/epoch - 9ms/step
Epoch 55/500

Epoch 00055: val_loss improved from 0.00455 to 0.00455, saving model to LSTM6.h5
16/16 - 0s - loss: 9.0557e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 128ms/epoch - 8ms/step
Epoch 56/500

Epoch 00056: val_loss improved from 0.00455 to 0.00455, saving model to LSTM6.h5
16/16 - 0s - loss: 9.0508e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 133ms/epoch - 8ms/step
Epoch 57/500

Epoch 00057: val_loss improved from 0.00455 to 0.00455, saving model to LSTM6.h5
16/16 - 0s - loss: 9.0458e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 126ms/epoch - 8ms/step
Epoch 58/500

Epoch 00058: val_loss improved from 0.00455 to 0.00455, saving model to LSTM6.h5
16/16 - 0s - loss: 9.0408e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 179ms/epoch - 11ms/step
Epoch 59/500

Epoch 00059: val_loss improved from 0.00455 to 0.00455, saving model to LSTM6.h5
16/16 - 0s - loss: 9.0357e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 135ms/epoch - 8ms/step
Epoch 60/500

Epoch 00060: val_loss improved from 0.00455 to 0.00455, saving model to LSTM6.h5
16/16 - 0s - loss: 9.0305e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 135ms/epoch - 8ms/step
Epoch 61/500

Epoch 00061: val_loss improved from 0.00455 to 0.00454, saving model to LSTM6.h5
16/16 - 0s - loss: 9.0254e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 132ms/epoch - 8ms/step
Epoch 62/500

Epoch 00062: val_loss improved from 0.00454 to 0.00454, saving model to LSTM6.h5
16/16 - 0s - loss: 9.0201e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 141ms/epoch - 9ms/step
Epoch 63/500

Epoch 00063: val_loss improved from 0.00454 to 0.00454, saving model to LSTM6.h5
16/16 - 0s - loss: 9.0148e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 140ms/epoch - 9ms/step
Epoch 64/500

Epoch 00064: val_loss improved from 0.00454 to 0.00454, saving model to LSTM6.h5
16/16 - 0s - loss: 9.0095e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 127ms/epoch - 8ms/step
Epoch 65/500

Epoch 00065: val_loss improved from 0.00454 to 0.00454, saving model to LSTM6.h5
16/16 - 0s - loss: 9.0042e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 125ms/epoch - 8ms/step
Epoch 66/500

Epoch 00066: val_loss improved from 0.00454 to 0.00454, saving model to LSTM6.h5
16/16 - 0s - loss: 8.9987e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 128ms/epoch - 8ms/step
Epoch 67/500

Epoch 00067: val_loss improved from 0.00454 to 0.00454, saving model to LSTM6.h5
16/16 - 0s - loss: 8.9933e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 121ms/epoch - 8ms/step
Epoch 68/500

Epoch 00068: val_loss improved from 0.00454 to 0.00454, saving model to LSTM6.h5
16/16 - 0s - loss: 8.9878e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 126ms/epoch - 8ms/step
Epoch 69/500

Epoch 00069: val_loss improved from 0.00454 to 0.00454, saving model to LSTM6.h5
16/16 - 0s - loss: 8.9822e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 137ms/epoch - 9ms/step
Epoch 70/500

Epoch 00070: val_loss improved from 0.00454 to 0.00454, saving model to LSTM6.h5
16/16 - 0s - loss: 8.9767e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 124ms/epoch - 8ms/step
Epoch 71/500

Epoch 00071: val_loss improved from 0.00454 to 0.00454, saving model to LSTM6.h5
16/16 - 0s - loss: 8.9710e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 132ms/epoch - 8ms/step
Epoch 72/500

Epoch 00072: val_loss improved from 0.00454 to 0.00454, saving model to LSTM6.h5
16/16 - 0s - loss: 8.9653e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 121ms/epoch - 8ms/step
Epoch 73/500

Epoch 00073: val_loss improved from 0.00454 to 0.00454, saving model to LSTM6.h5
16/16 - 0s - loss: 8.9596e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 136ms/epoch - 9ms/step
Epoch 74/500

Epoch 00074: val_loss improved from 0.00454 to 0.00454, saving model to LSTM6.h5
16/16 - 0s - loss: 8.9539e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 123ms/epoch - 8ms/step
Epoch 75/500

Epoch 00075: val_loss improved from 0.00454 to 0.00454, saving model to LSTM6.h5
16/16 - 0s - loss: 8.9481e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 131ms/epoch - 8ms/step
Epoch 76/500

Epoch 00076: val_loss improved from 0.00454 to 0.00454, saving model to LSTM6.h5
16/16 - 0s - loss: 8.9422e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 192ms/epoch - 12ms/step
Epoch 77/500

Epoch 00077: val_loss improved from 0.00454 to 0.00454, saving model to LSTM6.h5
16/16 - 0s - loss: 8.9363e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 125ms/epoch - 8ms/step
Epoch 78/500

Epoch 00078: val_loss improved from 0.00454 to 0.00453, saving model to LSTM6.h5
16/16 - 0s - loss: 8.9304e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 137ms/epoch - 9ms/step
Epoch 79/500

Epoch 00079: val_loss improved from 0.00453 to 0.00453, saving model to LSTM6.h5
16/16 - 0s - loss: 8.9244e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 120ms/epoch - 7ms/step
Epoch 80/500

Epoch 00080: val_loss improved from 0.00453 to 0.00453, saving model to LSTM6.h5
16/16 - 0s - loss: 8.9184e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 123ms/epoch - 8ms/step
Epoch 81/500

Epoch 00081: val_loss improved from 0.00453 to 0.00453, saving model to LSTM6.h5
16/16 - 0s - loss: 8.9124e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 121ms/epoch - 8ms/step
[epochs 82-131 omitted: at lr 1.0000e-05 the training loss declines slowly from 8.9063e-04 to 8.5645e-04 while val_loss holds at 0.0045-0.0046, never improving on 0.00453]
Epoch 00131: early stopping
SMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 109.05489356902385 
RMSE:	 10.442935103170173 
MAPE:	 8.732625329283675

EMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	44.4% Accuracy
MSE:	 73.04142380966745 
RMSE:	 8.546427546622475 
MAPE:	 7.099244401385842
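The saving / ReduceLROnPlateau / early-stopping pattern in the log above comes from Keras's ModelCheckpoint, ReduceLROnPlateau and EarlyStopping callbacks. A minimal pure-Python sketch of that monitoring logic (the patience values, `factor` and `min_lr` here are illustrative assumptions, not the notebook's actual settings):

```python
def simulate_callbacks(val_losses, lr=1e-3, lr_patience=3, stop_patience=6,
                       factor=0.1, min_lr=1e-5):
    """Replay patience-based LR reduction and early stopping on a val_loss series."""
    best = float("inf")
    lr_wait = stop_wait = 0
    events = []
    for epoch, vl in enumerate(val_losses, start=1):
        if vl < best:
            best = vl
            lr_wait = stop_wait = 0
            events.append((epoch, "improved", lr))   # ModelCheckpoint would save here
        else:
            lr_wait += 1
            stop_wait += 1
            if lr_wait >= lr_patience:               # ReduceLROnPlateau: cut lr by `factor`
                lr = max(lr * factor, min_lr)
                lr_wait = 0
                events.append((epoch, "reduce_lr", lr))
            if stop_wait >= stop_patience:           # EarlyStopping: halt after a longer stall
                events.append((epoch, "early_stop", lr))
                break
    return events
```

On a stalling val_loss series the simulator cuts the learning rate first and only stops after the longer stall, mirroring the LR cuts and the "Epoch 00131: early stopping" line above.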
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49
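Per the `WMA` help text above, the weighted moving average applies linear weights 1..timeperiod, so the newest price counts most. A self-contained equivalent (a sketch; the notebook itself calls `talib.WMA`, whose warm-up region is NaN rather than None):

```python
def wma(price, timeperiod=30):
    """Weighted moving average with linear weights 1..timeperiod (newest heaviest)."""
    weights = list(range(1, timeperiod + 1))
    denom = sum(weights)                      # 1 + 2 + ... + timeperiod
    out = [None] * (timeperiod - 1)           # warm-up: not enough data yet
    for i in range(timeperiod - 1, len(price)):
        window = price[i - timeperiod + 1 : i + 1]
        out.append(sum(w * p for w, p in zip(weights, window)) / denom)
    return out
```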

Working on WMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16999.780, Time=3.11 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14568.589, Time=4.60 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16789.784, Time=12.13 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14566.589, Time=7.20 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16919.987, Time=8.97 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-14616.097, Time=11.93 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-17225.955, Time=17.03 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-14562.589, Time=8.91 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-15582.364, Time=18.42 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-12043.670, Time=35.94 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 128.246 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8639.977
Date:                Sun, 12 Dec 2021   AIC                         -17225.955
Time:                        16:24:12   BIC                         -17099.302
Sample:                             0   HQIC                        -17177.315
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -4.802e-09   4.51e-06     -0.001      0.999   -8.84e-06    8.83e-06
x2         -4.783e-09    4.5e-06     -0.001      0.999   -8.83e-06    8.82e-06
x3         -4.811e-09   4.51e-06     -0.001      0.999   -8.85e-06    8.84e-06
x4             1.0000   4.51e-06   2.22e+05      0.000       1.000       1.000
x5         -4.353e-09    4.3e-06     -0.001      0.999   -8.43e-06    8.42e-06
x6         -1.569e-08   7.54e-06     -0.002      0.998   -1.48e-05    1.48e-05
x7          -4.75e-09   4.49e-06     -0.001      0.999    -8.8e-06    8.79e-06
x8         -4.628e-09   4.43e-06     -0.001      0.999   -8.69e-06    8.69e-06
x9         -4.733e-10   1.16e-06     -0.000      1.000   -2.27e-06    2.27e-06
x10         -7.88e-10    1.8e-06     -0.000      1.000   -3.52e-06    3.52e-06
x11        -4.609e-09   4.42e-06     -0.001      0.999   -8.68e-06    8.67e-06
x12        -4.607e-09   4.42e-06     -0.001      0.999   -8.68e-06    8.67e-06
x13        -4.792e-09   4.51e-06     -0.001      0.999   -8.84e-06    8.83e-06
x14        -3.777e-08   1.24e-05     -0.003      0.998   -2.44e-05    2.44e-05
x15         -3.99e-09   4.12e-06     -0.001      0.999   -8.08e-06    8.07e-06
x16        -1.309e-08   7.41e-06     -0.002      0.999   -1.45e-05    1.45e-05
x17        -4.789e-09   4.51e-06     -0.001      0.999   -8.85e-06    8.84e-06
x18        -2.665e-10   9.77e-07     -0.000      1.000   -1.92e-06    1.92e-06
x19        -4.919e-09   4.56e-06     -0.001      0.999   -8.94e-06    8.93e-06
x20            -4e-10   9.58e-07     -0.000      1.000   -1.88e-06    1.88e-06
x21        -6.782e-08   1.67e-05     -0.004      0.997   -3.27e-05    3.26e-05
x22         -6.03e-08   1.58e-05     -0.004      0.997   -3.09e-05    3.08e-05
x23        -3.157e-08   1.14e-05     -0.003      0.998   -2.23e-05    2.23e-05
x24        -3.671e-08   1.23e-05     -0.003      0.998   -2.41e-05    2.41e-05
ma.L1         -1.3901   5.58e-10  -2.49e+09      0.000      -1.390      -1.390
ma.L2          0.4033   5.75e-10   7.02e+08      0.000       0.403       0.403
sigma2      7.525e-11   6.92e-11      1.088      0.277   -6.03e-11    2.11e-10
===================================================================================
Ljung-Box (L1) (Q):                  69.18   Jarque-Bera (JB):           6366427.21
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            12.29
Prob(H) (two-sided):                  0.00   Kurtosis:                       437.97
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.29e+25. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 
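The stepwise search above ranks candidate orders by AIC = 2k − 2·ln(L). As a sanity check against the summary table: with the reported log-likelihood 8639.977 and k = 27 estimated parameters (x1..x24, ma.L1, ma.L2, sigma2), the reported AIC of −17225.955 is reproduced to rounding:

```python
def aic(log_likelihood, n_params):
    # Akaike information criterion: 2k - 2*ln(L)
    return 2 * n_params - 2 * log_likelihood

print(round(aic(8639.977, 27), 3))  # ~ -17225.954 (table: -17225.955; LL is shown rounded)
```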

/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04829, saving model to LSTM6.h5
17/17 - 4s - loss: 0.1049 - accuracy: 0.0000e+00 - val_loss: 0.0483 - val_accuracy: 0.0037 - lr: 0.0010 - 4s/epoch - 229ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.04829
17/17 - 0s - loss: 0.0452 - accuracy: 0.0000e+00 - val_loss: 0.1689 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 100ms/epoch - 6ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.04829 to 0.03648, saving model to LSTM6.h5
17/17 - 0s - loss: 0.0148 - accuracy: 0.0000e+00 - val_loss: 0.0365 - val_accuracy: 0.0037 - lr: 0.0010 - 146ms/epoch - 9ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.03648 to 0.00656, saving model to LSTM6.h5
17/17 - 0s - loss: 0.0075 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 0.0010 - 147ms/epoch - 9ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.00656
17/17 - 0s - loss: 0.0322 - accuracy: 0.0000e+00 - val_loss: 0.0123 - val_accuracy: 0.0037 - lr: 0.0010 - 107ms/epoch - 6ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.00656
17/17 - 0s - loss: 0.0128 - accuracy: 0.0000e+00 - val_loss: 0.0463 - val_accuracy: 0.0037 - lr: 0.0010 - 127ms/epoch - 7ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00656
17/17 - 0s - loss: 0.0221 - accuracy: 0.0000e+00 - val_loss: 0.0596 - val_accuracy: 0.0037 - lr: 0.0010 - 130ms/epoch - 8ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00656
17/17 - 0s - loss: 0.0313 - accuracy: 0.0000e+00 - val_loss: 0.0079 - val_accuracy: 0.0037 - lr: 0.0010 - 104ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.00656 to 0.00571, saving model to LSTM6.h5
17/17 - 0s - loss: 0.0277 - accuracy: 0.0000e+00 - val_loss: 0.0057 - val_accuracy: 0.0037 - lr: 0.0010 - 138ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00571
17/17 - 0s - loss: 0.0312 - accuracy: 0.0000e+00 - val_loss: 0.0744 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 106ms/epoch - 6ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00571
17/17 - 0s - loss: 0.0280 - accuracy: 0.0000e+00 - val_loss: 0.0155 - val_accuracy: 0.0037 - lr: 0.0010 - 115ms/epoch - 7ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00571
17/17 - 0s - loss: 0.0155 - accuracy: 0.0000e+00 - val_loss: 0.0120 - val_accuracy: 0.0037 - lr: 0.0010 - 119ms/epoch - 7ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00571
17/17 - 0s - loss: 0.0167 - accuracy: 0.0000e+00 - val_loss: 0.0214 - val_accuracy: 0.0037 - lr: 0.0010 - 114ms/epoch - 7ms/step
Epoch 14/500

Epoch 00014: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00014: val_loss did not improve from 0.00571
17/17 - 0s - loss: 0.0036 - accuracy: 0.0000e+00 - val_loss: 0.0116 - val_accuracy: 0.0037 - lr: 0.0010 - 105ms/epoch - 6ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00571
17/17 - 0s - loss: 0.0044 - accuracy: 0.0000e+00 - val_loss: 0.0086 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 106ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00571
17/17 - 0s - loss: 0.0021 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 109ms/epoch - 6ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00571
17/17 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0058 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 115ms/epoch - 7ms/step
Epoch 18/500

Epoch 00018: val_loss improved from 0.00571 to 0.00554, saving model to LSTM6.h5
17/17 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0055 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 141ms/epoch - 8ms/step
Epoch 19/500

Epoch 00019: val_loss improved from 0.00554 to 0.00540, saving model to LSTM6.h5
17/17 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0054 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 134ms/epoch - 8ms/step
Epoch 20/500

Epoch 00020: val_loss improved from 0.00540 to 0.00525, saving model to LSTM6.h5
17/17 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0052 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 140ms/epoch - 8ms/step
Epoch 21/500

Epoch 00021: val_loss improved from 0.00525 to 0.00510, saving model to LSTM6.h5
17/17 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0051 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 139ms/epoch - 8ms/step
Epoch 22/500

Epoch 00022: val_loss improved from 0.00510 to 0.00498, saving model to LSTM6.h5
17/17 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0050 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 127ms/epoch - 7ms/step
Epoch 23/500

Epoch 00023: val_loss improved from 0.00498 to 0.00489, saving model to LSTM6.h5
17/17 - 0s - loss: 9.9787e-04 - accuracy: 0.0000e+00 - val_loss: 0.0049 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 127ms/epoch - 7ms/step
Epoch 24/500

Epoch 00024: val_loss improved from 0.00489 to 0.00482, saving model to LSTM6.h5
17/17 - 0s - loss: 9.8252e-04 - accuracy: 0.0000e+00 - val_loss: 0.0048 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 122ms/epoch - 7ms/step
Epoch 25/500

Epoch 00025: val_loss improved from 0.00482 to 0.00477, saving model to LSTM6.h5
17/17 - 0s - loss: 9.7008e-04 - accuracy: 0.0000e+00 - val_loss: 0.0048 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 145ms/epoch - 9ms/step
Epoch 26/500

Epoch 00026: val_loss improved from 0.00477 to 0.00473, saving model to LSTM6.h5
17/17 - 0s - loss: 9.5968e-04 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 129ms/epoch - 8ms/step
Epoch 27/500

Epoch 00027: val_loss improved from 0.00473 to 0.00470, saving model to LSTM6.h5
17/17 - 0s - loss: 9.5080e-04 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 136ms/epoch - 8ms/step
Epoch 28/500

Epoch 00028: val_loss improved from 0.00470 to 0.00467, saving model to LSTM6.h5
17/17 - 0s - loss: 9.4306e-04 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 134ms/epoch - 8ms/step
Epoch 29/500

Epoch 00029: val_loss improved from 0.00467 to 0.00466, saving model to LSTM6.h5
17/17 - 0s - loss: 9.3621e-04 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 128ms/epoch - 8ms/step
Epoch 30/500

Epoch 00030: val_loss improved from 0.00466 to 0.00465, saving model to LSTM6.h5
17/17 - 0s - loss: 9.3003e-04 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 136ms/epoch - 8ms/step
Epoch 31/500

Epoch 00031: val_loss improved from 0.00465 to 0.00465, saving model to LSTM6.h5
17/17 - 0s - loss: 9.2438e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 139ms/epoch - 8ms/step
Epoch 32/500

Epoch 00032: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00032: val_loss did not improve from 0.00465
17/17 - 0s - loss: 9.1913e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 122ms/epoch - 7ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00465
17/17 - 0s - loss: 9.0969e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 111ms/epoch - 7ms/step
[epochs 34-81 omitted: after a further ReduceLROnPlateau at epoch 37 (lr already at 1.0000e-05), training loss declines slowly from 9.0925e-04 to 8.8335e-04 while val_loss holds at 0.0047-0.0048, never improving on 0.00465]
Epoch 00081: early stopping
SMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 109.05489356902385 
RMSE:	 10.442935103170173 
MAPE:	 8.732625329283675

EMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	44.4% Accuracy
MSE:	 73.04142380966745 
RMSE:	 8.546427546622475 
MAPE:	 7.099244401385842

WMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 84.94978866341171 
RMSE:	 9.21682096296829 
MAPE:	 7.490547440692417
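The MSE/RMSE/MAPE figures printed above follow from standard formulas, and "Prediction vs Close" is plausibly a directional hit rate against the actual closes. A sketch of how such a report could be computed (this reading of the accuracy metric, and the function and variable names, are assumptions, not the notebook's actual code):

```python
import math

def report(actual, predicted):
    """Return (mse, rmse, mape_percent, directional_accuracy_percent)."""
    n = len(actual)
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n
    rmse = math.sqrt(mse)
    mape = 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n
    # Directional accuracy: share of steps where the predicted move is in the
    # same direction as the actual move (zero moves counted as down here).
    hits = sum(
        ((predicted[i] - predicted[i - 1]) > 0) == ((actual[i] - actual[i - 1]) > 0)
        for i in range(1, n)
    )
    direction = 100 * hits / (n - 1)
    return mse, rmse, mape, direction
```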
DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
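The double exponential moving average above is defined as 2·EMA(n) − EMA(EMA(n), n), which cancels much of a single EMA's lag. A self-contained sketch (the recursive EMA here is seeded with the first value, so it is not numerically identical to `talib.DEMA`, which seeds its EMAs with an SMA):

```python
def ema(values, timeperiod):
    """Recursive EMA with smoothing alpha = 2 / (timeperiod + 1)."""
    alpha = 2.0 / (timeperiod + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

def dema(values, timeperiod=30):
    """DEMA = 2 * EMA(price) - EMA(EMA(price)), reducing single-EMA lag."""
    e1 = ema(values, timeperiod)
    e2 = ema(e1, timeperiod)
    return [2 * a - b for a, b in zip(e1, e2)]
```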

Working on DEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16999.785, Time=3.12 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14568.588, Time=4.69 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15575.689, Time=9.23 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14566.588, Time=7.27 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16714.796, Time=9.13 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15610.140, Time=10.56 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-17225.835, Time=22.70 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-14562.588, Time=9.42 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-16751.951, Time=21.31 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-11788.089, Time=31.05 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 128.505 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8639.917
Date:                Sun, 12 Dec 2021   AIC                         -17225.835
Time:                        16:30:59   BIC                         -17099.182
Sample:                             0   HQIC                        -17177.195
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -5.894e-09   3.61e-06     -0.002      0.999   -7.09e-06    7.08e-06
x2          -5.93e-09   3.63e-06     -0.002      0.999   -7.11e-06     7.1e-06
x3         -5.905e-09   3.62e-06     -0.002      0.999    -7.1e-06    7.09e-06
x4             1.0000   3.62e-06   2.76e+05      0.000       1.000       1.000
x5         -5.457e-09   3.48e-06     -0.002      0.999   -6.83e-06    6.82e-06
x6         -3.019e-08   7.72e-06     -0.004      0.997   -1.52e-05    1.51e-05
x7          -5.87e-09   3.61e-06     -0.002      0.999   -7.08e-06    7.07e-06
x8         -5.809e-09   3.59e-06     -0.002      0.999   -7.05e-06    7.04e-06
x9         -9.293e-11   9.83e-08     -0.001      0.999   -1.93e-07    1.93e-07
x10        -2.793e-09   2.47e-06     -0.001      0.999   -4.84e-06    4.84e-06
x11        -6.095e-09   3.68e-06     -0.002      0.999   -7.21e-06     7.2e-06
x12        -5.478e-09   3.49e-06     -0.002      0.999   -6.85e-06    6.84e-06
x13         -5.91e-09   3.62e-06     -0.002      0.999    -7.1e-06    7.09e-06
x14        -4.085e-08   9.35e-06     -0.004      0.997   -1.84e-05    1.83e-05
x15         -5.93e-09   3.63e-06     -0.002      0.999   -7.12e-06    7.11e-06
x16        -1.618e-09   1.92e-06     -0.001      0.999   -3.76e-06    3.75e-06
x17        -5.076e-09   3.37e-06     -0.002      0.999    -6.6e-06    6.59e-06
x18        -1.377e-08    5.5e-06     -0.003      0.998   -1.08e-05    1.08e-05
x19        -6.135e-09   3.69e-06     -0.002      0.999   -7.23e-06    7.22e-06
x20        -1.018e-08   4.43e-06     -0.002      0.998   -8.68e-06    8.66e-06
x21        -6.911e-08   1.21e-05     -0.006      0.995   -2.39e-05    2.37e-05
x22        -5.656e-08    1.1e-05     -0.005      0.996   -2.16e-05    2.15e-05
x23        -5.355e-08   1.07e-05     -0.005      0.996    -2.1e-05    2.09e-05
x24        -3.636e-08   8.85e-06     -0.004      0.997   -1.74e-05    1.73e-05
ma.L1         -1.3899   4.86e-11  -2.86e+10      0.000      -1.390      -1.390
ma.L2          0.4032    4.6e-11   8.76e+09      0.000       0.403       0.403
sigma2      7.526e-11   6.92e-11      1.088      0.277   -6.03e-11    2.11e-10
===================================================================================
Ljung-Box (L1) (Q):                  69.65   Jarque-Bera (JB):           6422892.15
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            12.42
Prob(H) (two-sided):                  0.00   Kurtosis:                       439.89
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.74e+29. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 
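The stepwise search above is pmdarima's `auto_arima` walking candidate (p,d,q) orders and keeping the one with the lowest AIC. The selection step itself reduces to AIC = 2k − 2·lnL and taking the minimum; this sketch replays it over a few of the candidate orders and AIC values the log printed.

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2*ln(L); lower is better."""
    return 2 * n_params - 2 * log_likelihood

# A few (p, d, q) -> AIC pairs copied from the stepwise-search log above.
candidates = {
    (1, 3, 1): -16999.785,
    (0, 3, 0): -14568.588,
    (0, 3, 2): -17225.835,
}
best = min(candidates, key=candidates.get)
```

Minimising AIC trades goodness of fit (the likelihood term) against model complexity (the parameter count), which is why the search does not simply pick the model with the most parameters.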

/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.01647, saving model to LSTM6.h5
10/10 - 4s - loss: 0.2634 - accuracy: 0.0000e+00 - val_loss: 0.0165 - val_accuracy: 0.0037 - lr: 0.0010 - 4s/epoch - 397ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.01647
10/10 - 0s - loss: 0.1891 - accuracy: 0.0000e+00 - val_loss: 0.0248 - val_accuracy: 0.0037 - lr: 0.0010 - 70ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.01647
10/10 - 0s - loss: 0.0093 - accuracy: 0.0000e+00 - val_loss: 0.0261 - val_accuracy: 0.0037 - lr: 0.0010 - 77ms/epoch - 8ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.01647
10/10 - 0s - loss: 0.0087 - accuracy: 0.0000e+00 - val_loss: 0.0570 - val_accuracy: 0.0037 - lr: 0.0010 - 78ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.01647 to 0.01335, saving model to LSTM6.h5
10/10 - 0s - loss: 0.0097 - accuracy: 0.0000e+00 - val_loss: 0.0134 - val_accuracy: 0.0037 - lr: 0.0010 - 88ms/epoch - 9ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.01335 to 0.01307, saving model to LSTM6.h5
10/10 - 0s - loss: 0.0028 - accuracy: 0.0000e+00 - val_loss: 0.0131 - val_accuracy: 0.0037 - lr: 0.0010 - 86ms/epoch - 9ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.01307 to 0.00753, saving model to LSTM6.h5
10/10 - 0s - loss: 0.0024 - accuracy: 0.0000e+00 - val_loss: 0.0075 - val_accuracy: 0.0037 - lr: 0.0010 - 93ms/epoch - 9ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00753
10/10 - 0s - loss: 0.0017 - accuracy: 0.0000e+00 - val_loss: 0.0146 - val_accuracy: 0.0037 - lr: 0.0010 - 88ms/epoch - 9ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00753
10/10 - 0s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0130 - val_accuracy: 0.0037 - lr: 0.0010 - 80ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00753
10/10 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0168 - val_accuracy: 0.0037 - lr: 0.0010 - 66ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00753
10/10 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0118 - val_accuracy: 0.0037 - lr: 0.0010 - 67ms/epoch - 7ms/step
Epoch 12/500

Epoch 00012: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00012: val_loss did not improve from 0.00753
10/10 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0156 - val_accuracy: 0.0037 - lr: 0.0010 - 85ms/epoch - 8ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00753
10/10 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0151 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 72ms/epoch - 7ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00753
10/10 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0144 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 79ms/epoch - 8ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00753
10/10 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0141 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 88ms/epoch - 9ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00753
10/10 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0142 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 71ms/epoch - 7ms/step
Epoch 17/500

Epoch 00017: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00017: val_loss did not improve from 0.00753
10/10 - 0s - loss: 9.9688e-04 - accuracy: 0.0000e+00 - val_loss: 0.0144 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 70ms/epoch - 7ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00753
10/10 - 0s - loss: 9.9189e-04 - accuracy: 0.0000e+00 - val_loss: 0.0144 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 70ms/epoch - 7ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00753
10/10 - 0s - loss: 9.9169e-04 - accuracy: 0.0000e+00 - val_loss: 0.0144 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
...
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.00753
10/10 - 0s - loss: 9.7828e-04 - accuracy: 0.0000e+00 - val_loss: 0.0148 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 101ms/epoch - 10ms/step
Epoch 00057: early stopping
SMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 109.05489356902385 
RMSE:	 10.442935103170173 
MAPE:	 8.732625329283675

EMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	44.4% Accuracy
MSE:	 73.04142380966745 
RMSE:	 8.546427546622475 
MAPE:	 7.099244401385842

WMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 84.94978866341171 
RMSE:	 9.21682096296829 
MAPE:	 7.490547440692417

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 151.61070955572364 
RMSE:	 12.313030072070955 
MAPE:	 11.085595013418024

KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
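Kaufman's Adaptive Moving Average scales its smoothing constant by an efficiency ratio (net move divided by total movement), so it tracks cleanly trending prices quickly but flattens out in choppy noise. This is a minimal sketch of the standard recursion; TA-Lib's warm-up and NaN handling differ in detail, so its exact values will not match.

```python
def kama(xs, n=30, fast=2, slow=30):
    """Kaufman Adaptive MA: smoothing constant adapts to the efficiency ratio."""
    f, s = 2 / (fast + 1), 2 / (slow + 1)
    out = [xs[0]]
    for t in range(1, len(xs)):
        lo = max(0, t - n)
        # Efficiency ratio: |net change| / sum of |bar-to-bar changes|.
        vol = sum(abs(xs[i] - xs[i - 1]) for i in range(lo + 1, t + 1))
        er = abs(xs[t] - xs[lo]) / vol if vol else 0.0
        sc = (er * (f - s) + s) ** 2            # squared smoothing constant
        out.append(out[-1] + sc * (xs[t] - out[-1]))
    return out
```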

Working on KAMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16921.943, Time=10.24 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14568.592, Time=4.52 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16797.275, Time=9.03 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14566.592, Time=6.83 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16996.465, Time=3.42 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-16999.509, Time=3.17 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17171.315, Time=6.58 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-16994.523, Time=3.65 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=-15518.026, Time=29.06 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 76.513 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood                8613.658
Date:                Sun, 12 Dec 2021   AIC                         -17171.315
Time:                        16:36:46   BIC                         -17039.972
Sample:                             0   HQIC                        -17120.874
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          -5.14e-10    7.6e-05  -6.76e-06      1.000      -0.000       0.000
x2         -5.041e-10   7.52e-05   -6.7e-06      1.000      -0.000       0.000
x3         -4.834e-10   7.38e-05  -6.55e-06      1.000      -0.000       0.000
x4             1.0000   7.46e-05   1.34e+04      0.000       1.000       1.000
x5         -4.462e-10   7.09e-05  -6.29e-06      1.000      -0.000       0.000
x6         -3.064e-09      0.000  -1.84e-05      1.000      -0.000       0.000
x7         -4.751e-10   7.35e-05  -6.46e-06      1.000      -0.000       0.000
x8         -4.628e-10   7.28e-05  -6.36e-06      1.000      -0.000       0.000
x9          -9.21e-11   9.37e-06  -9.83e-06      1.000   -1.84e-05    1.84e-05
x10        -2.165e-10    3.1e-05  -6.98e-06      1.000   -6.08e-05    6.08e-05
x11        -4.665e-10   7.28e-05  -6.41e-06      1.000      -0.000       0.000
x12         -4.62e-10   7.23e-05  -6.39e-06      1.000      -0.000       0.000
x13        -4.906e-10   7.43e-05   -6.6e-06      1.000      -0.000       0.000
x14        -3.985e-09      0.000  -1.87e-05      1.000      -0.000       0.000
x15        -4.897e-10   7.48e-05  -6.55e-06      1.000      -0.000       0.000
x16        -7.327e-10   9.24e-05  -7.93e-06      1.000      -0.000       0.000
x17        -4.173e-10   6.93e-05  -6.02e-06      1.000      -0.000       0.000
x18        -3.397e-10   6.02e-05  -5.64e-06      1.000      -0.000       0.000
x19        -6.012e-10    8.3e-05  -7.25e-06      1.000      -0.000       0.000
x20         -9.09e-10      0.000  -9.05e-06      1.000      -0.000       0.000
x21        -6.188e-09      0.000  -2.32e-05      1.000      -0.001       0.001
x22        -1.992e-09      0.000  -1.33e-05      1.000      -0.000       0.000
x23        -3.669e-09      0.000  -1.79e-05      1.000      -0.000       0.000
x24        -1.065e-09      0.000  -1.01e-05      1.000      -0.000       0.000
ar.L1         -1.2073   5.73e-10  -2.11e+09      0.000      -1.207      -1.207
ar.L2         -0.9083   5.93e-10  -1.53e+09      0.000      -0.908      -0.908
ar.L3         -0.4033   5.84e-10  -6.91e+08      0.000      -0.403      -0.403
sigma2       8.06e-11   6.94e-11      1.162      0.245   -5.54e-11    2.17e-10
===================================================================================
Ljung-Box (L1) (Q):                  13.77   Jarque-Bera (JB):           2436796.68
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             4.07
Prob(H) (two-sided):                  0.00   Kurtosis:                       272.41
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.23e+28. Standard errors may be unstable.
ARIMA order: (3, 3, 0) 

/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.10479, saving model to LSTM6.h5
45/45 - 5s - loss: 0.1250 - accuracy: 0.0000e+00 - val_loss: 0.1048 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 5s/epoch - 102ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.10479 to 0.08778, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0877 - accuracy: 0.0000e+00 - val_loss: 0.0878 - val_accuracy: 0.0037 - lr: 0.0010 - 295ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.08778 to 0.00637, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0120 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 0.0010 - 308ms/epoch - 7ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.00637
45/45 - 0s - loss: 0.0063 - accuracy: 0.0000e+00 - val_loss: 0.0347 - val_accuracy: 0.0037 - lr: 0.0010 - 295ms/epoch - 7ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.00637 to 0.00356, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0064 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 0.0010 - 263ms/epoch - 6ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.00356
45/45 - 0s - loss: 0.0023 - accuracy: 0.0000e+00 - val_loss: 0.0121 - val_accuracy: 0.0037 - lr: 0.0010 - 259ms/epoch - 6ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00356
45/45 - 0s - loss: 0.0017 - accuracy: 0.0000e+00 - val_loss: 0.0039 - val_accuracy: 0.0037 - lr: 0.0010 - 262ms/epoch - 6ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00356
45/45 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 0.0010 - 264ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00356
45/45 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 0.0010 - 244ms/epoch - 5ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.00356 to 0.00305, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0030 - val_accuracy: 0.0037 - lr: 0.0010 - 289ms/epoch - 6ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00305
45/45 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0078 - val_accuracy: 0.0037 - lr: 0.0010 - 250ms/epoch - 6ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00305
45/45 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 0.0010 - 240ms/epoch - 5ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00305
45/45 - 0s - loss: 0.0026 - accuracy: 0.0000e+00 - val_loss: 0.0114 - val_accuracy: 0.0037 - lr: 0.0010 - 269ms/epoch - 6ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00305
45/45 - 0s - loss: 0.0064 - accuracy: 0.0000e+00 - val_loss: 0.0091 - val_accuracy: 0.0037 - lr: 0.0010 - 286ms/epoch - 6ms/step
Epoch 15/500

Epoch 00015: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00015: val_loss did not improve from 0.00305
45/45 - 0s - loss: 0.0176 - accuracy: 0.0000e+00 - val_loss: 0.0089 - val_accuracy: 0.0037 - lr: 0.0010 - 271ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00305
45/45 - 0s - loss: 0.0164 - accuracy: 0.0000e+00 - val_loss: 0.0405 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 253ms/epoch - 6ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00305
45/45 - 0s - loss: 0.0120 - accuracy: 0.0000e+00 - val_loss: 0.0169 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 258ms/epoch - 6ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00305
45/45 - 0s - loss: 0.0059 - accuracy: 0.0000e+00 - val_loss: 0.0108 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 268ms/epoch - 6ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00305
45/45 - 0s - loss: 0.0042 - accuracy: 0.0000e+00 - val_loss: 0.0071 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 280ms/epoch - 6ms/step
Epoch 20/500

Epoch 00020: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00020: val_loss did not improve from 0.00305
45/45 - 0s - loss: 0.0032 - accuracy: 0.0000e+00 - val_loss: 0.0051 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 247ms/epoch - 5ms/step
...
Epoch 45/500

Epoch 00045: val_loss improved from 0.00305 to 0.00302, saving model to LSTM6.h5
45/45 - 1s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0030 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 728ms/epoch - 16ms/step
Epoch 46/500

Epoch 00046: val_loss improved from 0.00302 to 0.00298, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0030 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 274ms/epoch - 6ms/step
Epoch 47/500

Epoch 00047: val_loss improved from 0.00298 to 0.00294, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 306ms/epoch - 7ms/step
Epoch 48/500

Epoch 00048: val_loss improved from 0.00294 to 0.00291, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 280ms/epoch - 6ms/step
Epoch 49/500

Epoch 00049: val_loss improved from 0.00291 to 0.00288, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 309ms/epoch - 7ms/step
Epoch 50/500

Epoch 00050: val_loss improved from 0.00288 to 0.00285, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 297ms/epoch - 7ms/step
Epoch 51/500

Epoch 00051: val_loss improved from 0.00285 to 0.00282, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 283ms/epoch - 6ms/step
Epoch 52/500

Epoch 00052: val_loss improved from 0.00282 to 0.00280, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 294ms/epoch - 7ms/step
Epoch 53/500

Epoch 00053: val_loss improved from 0.00280 to 0.00278, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 288ms/epoch - 6ms/step
Epoch 54/500

Epoch 00054: val_loss improved from 0.00278 to 0.00276, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 261ms/epoch - 6ms/step
Epoch 55/500

Epoch 00055: val_loss improved from 0.00276 to 0.00274, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0027 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 271ms/epoch - 6ms/step
Epoch 56/500

Epoch 00056: val_loss improved from 0.00274 to 0.00273, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0027 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 283ms/epoch - 6ms/step
Epoch 57/500

Epoch 00057: val_loss improved from 0.00273 to 0.00272, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0027 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 309ms/epoch - 7ms/step
Epoch 58/500

Epoch 00058: val_loss improved from 0.00272 to 0.00272, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0027 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 321ms/epoch - 7ms/step
Epoch 59/500

Epoch 00059: val_loss improved from 0.00272 to 0.00271, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0027 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 295ms/epoch - 7ms/step
Epoch 60/500

Epoch 00060: val_loss improved from 0.00271 to 0.00271, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0027 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 293ms/epoch - 7ms/step
Epoch 61/500

Epoch 00061: val_loss improved from 0.00271 to 0.00271, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0027 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 289ms/epoch - 6ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.00271
45/45 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0027 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 283ms/epoch - 6ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.00271
45/45 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0027 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 274ms/epoch - 6ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.00271
45/45 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0027 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 262ms/epoch - 6ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.00271
45/45 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0027 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 270ms/epoch - 6ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.00271
45/45 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0027 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 265ms/epoch - 6ms/step
Epoch 67/500

Epoch 00067: val_loss did not improve from 0.00271
45/45 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0027 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 256ms/epoch - 6ms/step
Epoch 68/500

Epoch 00068: val_loss did not improve from 0.00271
45/45 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0027 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 279ms/epoch - 6ms/step
Epoch 69/500

Epoch 00069: val_loss did not improve from 0.00271
45/45 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0027 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 243ms/epoch - 5ms/step
Epoch 70/500

Epoch 00070: val_loss did not improve from 0.00271
45/45 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 271ms/epoch - 6ms/step
Epoch 71/500

Epoch 00071: val_loss did not improve from 0.00271
45/45 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 278ms/epoch - 6ms/step
Epoch 72/500

Epoch 00072: val_loss did not improve from 0.00271
45/45 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 258ms/epoch - 6ms/step
Epoch 73/500

Epoch 00073: val_loss did not improve from 0.00271
45/45 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 288ms/epoch - 6ms/step
Epoch 74/500

Epoch 00074: val_loss did not improve from 0.00271
45/45 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 264ms/epoch - 6ms/step
Epoch 75/500

Epoch 00075: val_loss did not improve from 0.00271
45/45 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 239ms/epoch - 5ms/step
Epoch 76/500

Epoch 00076: val_loss did not improve from 0.00271
45/45 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 243ms/epoch - 5ms/step
Epoch 77/500

Epoch 00077: val_loss did not improve from 0.00271
45/45 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 292ms/epoch - 6ms/step
Epoch 78/500

Epoch 00078: val_loss did not improve from 0.00271
45/45 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 244ms/epoch - 5ms/step
Epoch 79/500

Epoch 00079: val_loss did not improve from 0.00271
45/45 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 244ms/epoch - 5ms/step
Epoch 80/500

Epoch 00080: val_loss did not improve from 0.00271
45/45 - 0s - loss: 9.9892e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 251ms/epoch - 6ms/step
Epoch 81/500

Epoch 00081: val_loss did not improve from 0.00271
45/45 - 0s - loss: 9.9291e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 276ms/epoch - 6ms/step
Epoch 82/500

Epoch 00082: val_loss did not improve from 0.00271
45/45 - 0s - loss: 9.8710e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 279ms/epoch - 6ms/step
Epoch 83/500

Epoch 00083: val_loss did not improve from 0.00271
45/45 - 0s - loss: 9.8146e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 259ms/epoch - 6ms/step
Epoch 84/500

Epoch 00084: val_loss did not improve from 0.00271
45/45 - 0s - loss: 9.7599e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 242ms/epoch - 5ms/step
Epoch 85/500

Epoch 00085: val_loss did not improve from 0.00271
45/45 - 0s - loss: 9.7069e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 272ms/epoch - 6ms/step
Epoch 86/500

Epoch 00086: val_loss did not improve from 0.00271
45/45 - 0s - loss: 9.6554e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 276ms/epoch - 6ms/step
Epoch 87/500

Epoch 00087: val_loss did not improve from 0.00271
45/45 - 0s - loss: 9.6053e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 284ms/epoch - 6ms/step
Epoch 88/500

Epoch 00088: val_loss did not improve from 0.00271
45/45 - 0s - loss: 9.5565e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 268ms/epoch - 6ms/step
Epoch 89/500

Epoch 00089: val_loss did not improve from 0.00271
45/45 - 0s - loss: 9.5091e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 259ms/epoch - 6ms/step
Epoch 90/500

Epoch 00090: val_loss did not improve from 0.00271
45/45 - 0s - loss: 9.4628e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 283ms/epoch - 6ms/step
Epoch 91/500

Epoch 00091: val_loss did not improve from 0.00271
45/45 - 0s - loss: 9.4176e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 261ms/epoch - 6ms/step
Epoch 92/500

Epoch 00092: val_loss did not improve from 0.00271
45/45 - 0s - loss: 9.3735e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 281ms/epoch - 6ms/step
Epoch 93/500

Epoch 00093: val_loss did not improve from 0.00271
45/45 - 0s - loss: 9.3304e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 246ms/epoch - 5ms/step
Epoch 94/500

Epoch 00094: val_loss did not improve from 0.00271
45/45 - 0s - loss: 9.2883e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 253ms/epoch - 6ms/step
Epoch 95/500

Epoch 00095: val_loss did not improve from 0.00271
45/45 - 0s - loss: 9.2470e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 265ms/epoch - 6ms/step
Epoch 96/500

Epoch 00096: val_loss did not improve from 0.00271
45/45 - 0s - loss: 9.2065e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 268ms/epoch - 6ms/step
Epoch 97/500

Epoch 00097: val_loss did not improve from 0.00271
45/45 - 0s - loss: 9.1668e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 262ms/epoch - 6ms/step
Epoch 98/500

Epoch 00098: val_loss did not improve from 0.00271
45/45 - 0s - loss: 9.1279e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 288ms/epoch - 6ms/step
Epoch 99/500

Epoch 00099: val_loss did not improve from 0.00271
45/45 - 0s - loss: 9.0896e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 308ms/epoch - 7ms/step
Epoch 100/500

Epoch 00100: val_loss did not improve from 0.00271
45/45 - 0s - loss: 9.0519e-04 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 274ms/epoch - 6ms/step
Epoch 101/500

Epoch 00101: val_loss did not improve from 0.00271
45/45 - 0s - loss: 9.0149e-04 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 292ms/epoch - 6ms/step
Epoch 102/500

Epoch 00102: val_loss did not improve from 0.00271
45/45 - 0s - loss: 8.9784e-04 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 302ms/epoch - 7ms/step
Epoch 103/500

Epoch 00103: val_loss did not improve from 0.00271
45/45 - 0s - loss: 8.9425e-04 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 242ms/epoch - 5ms/step
Epoch 104/500

Epoch 00104: val_loss did not improve from 0.00271
45/45 - 0s - loss: 8.9070e-04 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 247ms/epoch - 5ms/step
Epoch 105/500

Epoch 00105: val_loss did not improve from 0.00271
45/45 - 0s - loss: 8.8720e-04 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 281ms/epoch - 6ms/step
Epoch 106/500

Epoch 00106: val_loss did not improve from 0.00271
45/45 - 0s - loss: 8.8375e-04 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 313ms/epoch - 7ms/step
Epoch 107/500

Epoch 00107: val_loss did not improve from 0.00271
45/45 - 0s - loss: 8.8034e-04 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 261ms/epoch - 6ms/step
Epoch 108/500

Epoch 00108: val_loss did not improve from 0.00271
45/45 - 0s - loss: 8.7696e-04 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 243ms/epoch - 5ms/step
Epoch 109/500

Epoch 00109: val_loss did not improve from 0.00271
45/45 - 0s - loss: 8.7363e-04 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 255ms/epoch - 6ms/step
Epoch 110/500

Epoch 00110: val_loss did not improve from 0.00271
45/45 - 0s - loss: 8.7033e-04 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 269ms/epoch - 6ms/step
Epoch 111/500

Epoch 00111: val_loss did not improve from 0.00271
45/45 - 0s - loss: 8.6706e-04 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 283ms/epoch - 6ms/step
Epoch 00111: early stopping
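Training halts at epoch 111 despite `epochs=500` because `EarlyStopping` and `ReduceLROnPlateau` monitor `val_loss`. The bookkeeping those callbacks perform can be sketched in pure Python; the `patience`, `factor`, and `min_lr` values below are illustrative assumptions, not read from the notebook's actual callback configuration:

```python
def run_with_callbacks(val_losses, lr=1e-3, factor=0.1, min_lr=1e-5,
                       lr_patience=5, stop_patience=50):
    """Simulate ReduceLROnPlateau + EarlyStopping bookkeeping over a
    sequence of per-epoch validation losses (assumed patience values).
    Returns (best_val_loss, stop_epoch, final_lr)."""
    best = float("inf")
    since_best = 0
    for epoch, vl in enumerate(val_losses, start=1):
        if vl < best:                           # "val_loss improved ... saving model"
            best, since_best = vl, 0
        else:                                   # "val_loss did not improve"
            since_best += 1
            if since_best % lr_patience == 0:   # ReduceLROnPlateau step
                lr = max(lr * factor, min_lr)
            if since_best >= stop_patience:     # EarlyStopping triggers
                return best, epoch, lr
    return best, len(val_losses), lr

# A loss curve that improves, then plateaus: training halts early.
losses = [0.01 - 1e-4 * i for i in range(10)] + [0.009] * 100
best, stop_epoch, lr = run_with_callbacks(losses)
```

The real callbacks also restore the best checkpoint (here, `LSTM6.h5`), which is why the reported metrics come from the lowest-`val_loss` epoch rather than the last one.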
SMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 109.05489356902385 
RMSE:	 10.442935103170173 
MAPE:	 8.732625329283675

EMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	44.4% Accuracy
MSE:	 73.04142380966745 
RMSE:	 8.546427546622475 
MAPE:	 7.099244401385842

WMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 84.94978866341171 
RMSE:	 9.21682096296829 
MAPE:	 7.490547440692417

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 151.61070955572364 
RMSE:	 12.313030072070955 
MAPE:	 11.085595013418024

KAMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 80.0014001509936 
RMSE:	 8.944350180476702 
MAPE:	 7.358601729961256
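The per-method error metrics printed above are straightforward to reproduce. The MSE/RMSE/MAPE formulas are standard; the directional-accuracy definition below (sign of the predicted move vs. sign of the realised move) is an assumption about what "Prediction vs Close" means, not taken from the notebook's code:

```python
import math

def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    return math.sqrt(mse(y_true, y_pred))

def mape(y_true, y_pred):
    # Mean absolute percentage error, in percent (as printed above).
    return 100 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

def directional_accuracy(close, pred):
    # Assumed definition: share of days where the move predicted from the
    # previous close has the same sign as the realised move.
    hits = sum(
        (pred[i] - close[i - 1] > 0) == (close[i] - close[i - 1] > 0)
        for i in range(1, len(close))
    )
    return 100 * hits / (len(close) - 1)

close = [100.0, 102.0, 101.0, 105.0]
pred  = [101.0, 103.0, 100.0, 104.0]
```

Note that a low MAPE with near-50% directional accuracy (as seen above) means the model tracks the level of the series well while adding little information about the direction of the next move.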
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14
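MIDPOINT is simple enough to sketch by hand, which is useful for sanity-checking the smoothed series being fed into the ARIMA stage. A rough pure-Python equivalent (alignment may differ from TA-Lib, which pads the warm-up period with NaN):

```python
def midpoint(prices, timeperiod=14):
    """(highest + lowest) / 2 over each trailing window of `timeperiod` bars."""
    out = []
    for i in range(timeperiod - 1, len(prices)):
        window = prices[i - timeperiod + 1 : i + 1]
        out.append((max(window) + min(window)) / 2)
    return out

result = midpoint([1, 2, 3, 4, 5], timeperiod=3)
```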

Working on MIDPOINT predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16999.768, Time=3.03 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14568.591, Time=4.53 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15581.065, Time=8.89 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14566.591, Time=7.32 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16536.628, Time=10.00 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-13971.493, Time=10.04 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-17226.044, Time=20.24 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-14562.591, Time=9.42 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-16754.945, Time=19.03 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-15001.855, Time=20.62 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 113.134 seconds
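The stepwise search above is pmdarima's `auto_arima` trying neighbouring (p,d,q) orders and keeping the one with the lowest AIC. The final selection step can be illustrated with the AIC values transcribed from the trace above:

```python
# (p, d, q) -> AIC, transcribed from the stepwise trace above
aic_trace = {
    (1, 3, 1): -16999.768,
    (0, 3, 0): -14568.591,
    (1, 3, 0): -15581.065,
    (0, 3, 1): -14566.591,
    (2, 3, 1): -16536.628,
    (1, 3, 2): -13971.493,
    (0, 3, 2): -17226.044,
    (0, 3, 3): -14562.591,
    (1, 3, 3): -16754.945,
}

# Lowest AIC wins -- matching the reported "Best model: ARIMA(0,3,2)(0,0,0)[0]".
best_order = min(aic_trace, key=aic_trace.get)
```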
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8640.022
Date:                Sun, 12 Dec 2021   AIC                         -17226.044
Time:                        16:40:50   BIC                         -17099.391
Sample:                             0   HQIC                        -17177.404
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -5.031e-09   1.06e-05     -0.000      1.000   -2.08e-05    2.08e-05
x2          -4.99e-09   8.12e-06     -0.001      1.000   -1.59e-05    1.59e-05
x3         -5.114e-09   1.38e-05     -0.000      1.000    -2.7e-05     2.7e-05
x4             1.0000   8.91e-06   1.12e+05      0.000       1.000       1.000
x5          -4.55e-09    8.2e-06     -0.001      1.000   -1.61e-05    1.61e-05
x6         -9.992e-08      0.001     -0.000      1.000      -0.002       0.002
x7         -4.607e-09   1.97e-05     -0.000      1.000   -3.86e-05    3.86e-05
x8         -4.591e-09   1.77e-05     -0.000      1.000   -3.48e-05    3.48e-05
x9         -2.538e-09   1.13e-05     -0.000      1.000   -2.21e-05    2.21e-05
x10        -4.315e-09   6.08e-06     -0.001      0.999   -1.19e-05    1.19e-05
x11        -4.545e-09   1.62e-05     -0.000      1.000   -3.18e-05    3.18e-05
x12        -4.701e-09   1.97e-05     -0.000      1.000   -3.87e-05    3.87e-05
x13        -4.823e-09   1.18e-05     -0.000      1.000    -2.3e-05     2.3e-05
x14         -4.08e-08   4.99e-05     -0.001      0.999   -9.79e-05    9.78e-05
x15        -5.557e-09   2.03e-05     -0.000      1.000   -3.99e-05    3.99e-05
x16        -3.541e-09    1.3e-05     -0.000      1.000   -2.55e-05    2.55e-05
x17        -3.463e-09   1.51e-05     -0.000      1.000   -2.97e-05    2.97e-05
x18        -1.534e-08      4e-05     -0.000      1.000   -7.85e-05    7.85e-05
x19        -6.118e-09   2.07e-05     -0.000      1.000   -4.05e-05    4.05e-05
x20        -1.581e-08   3.38e-05     -0.000      1.000   -6.62e-05    6.61e-05
x21        -5.505e-08    5.6e-05     -0.001      0.999      -0.000       0.000
x22        -2.936e-08   4.55e-05     -0.001      0.999   -8.92e-05    8.92e-05
x23        -3.882e-08   4.89e-05     -0.001      0.999   -9.58e-05    9.57e-05
x24        -2.099e-08   4.87e-05     -0.000      1.000   -9.54e-05    9.54e-05
ma.L1         -1.3900   1.23e-07  -1.13e+07      0.000      -1.390      -1.390
ma.L2          0.4044   1.43e-07   2.82e+06      0.000       0.404       0.404
sigma2      7.525e-11   7.22e-11      1.042      0.297   -6.63e-11    2.17e-10
===================================================================================
Ljung-Box (L1) (Q):                  55.84   Jarque-Bera (JB):           1335305.59
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.09   Skew:                             5.74
Prob(H) (two-sided):                  0.00   Kurtosis:                       202.19
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.77e+23. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 
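The AIC in the SARIMAX table follows directly from the reported log-likelihood via AIC = 2k − 2 ln L. With 24 exogenous coefficients (x1–x24), two MA terms, and sigma2, k = 27 here:

```python
log_likelihood = 8640.022   # from the SARIMAX summary above
k = 24 + 2 + 1              # exogenous coefs + ma.L1/ma.L2 + sigma2
aic = 2 * k - 2 * log_likelihood   # reproduces the reported AIC of -17226.044
```

The Jarque-Bera statistic and kurtosis of 202.19 in the same table confirm the residuals are far from mesokurtic, which is the volatility-balance concern raised in the introduction.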

/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.03788, saving model to LSTM6.h5
58/58 - 4s - loss: 0.1476 - accuracy: 0.0000e+00 - val_loss: 0.0379 - val_accuracy: 0.0037 - lr: 0.0010 - 4s/epoch - 68ms/step

[... epochs 2-14: val_loss improved intermittently to 0.00435; ReduceLROnPlateau reduced the learning rate to 1.0000e-04 at epoch 12 ...]

Epoch 00015: val_loss improved from 0.00435 to 0.00367, saving model to LSTM6.h5

[... epochs 16-64: val_loss did not improve from 0.00367; learning rate reduced to 1.0000e-05 at epoch 20 ...]

Epoch 00065: early stopping
SMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 109.05489356902385 
RMSE:	 10.442935103170173 
MAPE:	 8.732625329283675

EMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	44.4% Accuracy
MSE:	 73.04142380966745 
RMSE:	 8.546427546622475 
MAPE:	 7.099244401385842

WMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 84.94978866341171 
RMSE:	 9.21682096296829 
MAPE:	 7.490547440692417

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 151.61070955572364 
RMSE:	 12.313030072070955 
MAPE:	 11.085595013418024

KAMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 80.0014001509936 
RMSE:	 8.944350180476702 
MAPE:	 7.358601729961256

MIDPOINT
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 61.18379984959283 
RMSE:	 7.822007405365507 
MAPE:	 6.441728946960992
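The per-indicator summaries above report directional accuracy alongside MSE, RMSE, and MAPE. A minimal sketch of how such metrics can be computed — note the exact accuracy definitions are assumptions here; "Prediction vs Close" is taken as predicted next-step direction matched against the actual close-to-close direction:

```python
import numpy as np

def evaluate(pred: np.ndarray, close: np.ndarray) -> dict:
    """Error and direction metrics for a forecast vs. the closing price."""
    mse = np.mean((pred - close) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((close - pred) / close)) * 100
    # Directional accuracy (assumed definition): sign of the predicted
    # step vs. sign of the realised close-to-close step.
    pred_dir = np.sign(np.diff(pred))
    true_dir = np.sign(np.diff(close))
    direction = np.mean(pred_dir == true_dir) * 100
    return {"MSE": mse, "RMSE": rmse, "MAPE": mape, "Direction%": direction}
```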
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
19
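T3 smooths by composing a "generalized DEMA" three times. A pure-pandas sketch, assuming Tillson's standard definition GD(x) = EMA(x)·(1+v) − EMA(EMA(x))·v and T3 = GD(GD(GD(x))); TA-Lib's implementation details (e.g. unstable-period handling) may differ slightly:

```python
import pandas as pd

def t3(price: pd.Series, timeperiod: int = 5, vfactor: float = 0.7) -> pd.Series:
    """Tillson T3: three applications of a volume-factor-weighted DEMA."""
    def gd(s: pd.Series) -> pd.Series:
        e1 = s.ewm(span=timeperiod, adjust=False).mean()
        e2 = e1.ewm(span=timeperiod, adjust=False).mean()
        return e1 * (1 + vfactor) - e2 * vfactor
    return gd(gd(gd(price)))
```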

Working on T3 predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17000.569, Time=3.20 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-15576.554, Time=5.82 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16078.305, Time=8.22 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-15574.554, Time=9.60 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16998.627, Time=3.46 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-16429.916, Time=12.37 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-17000.664, Time=3.29 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-15700.026, Time=11.25 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-15704.282, Time=15.04 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-16998.664, Time=3.21 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 75.481 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8527.332
Date:                Sun, 12 Dec 2021   AIC                         -17000.664
Time:                        16:46:57   BIC                         -16874.011
Sample:                             0   HQIC                        -16952.024
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          8.378e-14   2.16e-06   3.89e-08      1.000   -4.23e-06    4.23e-06
x2          7.457e-14   2.15e-06   3.47e-08      1.000   -4.22e-06    4.22e-06
x3          2.279e-14   2.16e-06   1.05e-08      1.000   -4.24e-06    4.24e-06
x4             1.0000   2.16e-06   4.63e+05      0.000       1.000       1.000
x5          1.211e-12   2.07e-06   5.86e-07      1.000   -4.05e-06    4.05e-06
x6          3.146e-15   2.67e-06   1.18e-09      1.000   -5.23e-06    5.23e-06
x7          1.593e-13   2.15e-06   7.41e-08      1.000   -4.21e-06    4.21e-06
x8            -0.0001    2.1e-06    -48.778      0.000      -0.000   -9.82e-05
x9          5.141e-14   6.35e-07    8.1e-08      1.000   -1.24e-06    1.24e-06
x10        -6.174e-05   1.34e-06    -45.995      0.000   -6.44e-05   -5.91e-05
x11            0.0003   2.15e-06    148.354      0.000       0.000       0.000
x12           -0.0002   2.02e-06    -93.730      0.000      -0.000      -0.000
x13         1.967e-14   2.16e-06    9.1e-09      1.000   -4.23e-06    4.23e-06
x14        -1.297e-14   5.65e-06  -2.29e-09      1.000   -1.11e-05    1.11e-05
x15         -3.18e-12   1.82e-06  -1.75e-06      1.000   -3.57e-06    3.57e-06
x16        -1.426e-12   4.51e-06  -3.16e-07      1.000   -8.84e-06    8.84e-06
x17         7.474e-13   2.37e-06   3.16e-07      1.000   -4.64e-06    4.64e-06
x18         -2.92e-13    2.9e-06  -1.01e-07      1.000   -5.68e-06    5.68e-06
x19        -4.211e-14   1.89e-06  -2.22e-08      1.000   -3.71e-06    3.71e-06
x20        -1.515e-13    1.2e-06  -1.26e-07      1.000   -2.36e-06    2.36e-06
x21         6.555e-13   6.37e-06   1.03e-07      1.000   -1.25e-05    1.25e-05
x22         1.212e-14   6.19e-06   1.96e-09      1.000   -1.21e-05    1.21e-05
x23        -3.877e-13   3.76e-06  -1.03e-07      1.000   -7.38e-06    7.38e-06
x24         8.127e-15   4.01e-06   2.03e-09      1.000   -7.86e-06    7.86e-06
ma.L1         -1.3370   3.84e-12  -3.48e+11      0.000      -1.337      -1.337
ma.L2          0.4289   1.65e-12    2.6e+11      0.000       0.429       0.429
sigma2          1e-10   6.99e-11      1.430      0.153   -3.71e-11    2.37e-10
===================================================================================
Ljung-Box (L1) (Q):                   4.57   Jarque-Bera (JB):           3228712.87
Prob(Q):                              0.03   Prob(JB):                         0.00
Heteroskedasticity (H):               0.12   Skew:                            -9.87
Prob(H) (two-sided):                  0.00   Kurtosis:                       312.63
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.57e+30. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 

/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
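The UserWarning above comes from passing `lr=` to `Adam`; newer Keras expects `learning_rate=`. The checkpoint, learning-rate-reduction, and early-stopping behaviour visible in the logs corresponds to the standard Keras callbacks; a sketch follows, where the monitor/patience/factor values are assumptions inferred from the log pattern, not confirmed settings:

```python
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import (ModelCheckpoint, ReduceLROnPlateau,
                                        EarlyStopping)

# Use learning_rate, not the deprecated lr alias
opt = Adam(learning_rate=1e-3)

callbacks = [
    # Saves to LSTM6.h5 whenever val_loss improves, as in the logs
    ModelCheckpoint("LSTM6.h5", monitor="val_loss",
                    save_best_only=True, verbose=1),
    # Cuts lr by 10x on plateau (patience assumed from the log spacing)
    ReduceLROnPlateau(monitor="val_loss", factor=0.1,
                      patience=5, min_lr=1e-5, verbose=1),
    # Stops the 500-epoch budget early once val_loss stalls
    EarlyStopping(monitor="val_loss", patience=50, verbose=1),
]
# model.fit(X_train, y_train, epochs=500, validation_data=(X_val, y_val),
#           callbacks=callbacks, verbose=2)
```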
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.07461, saving model to LSTM6.h5
43/43 - 4s - loss: 0.1214 - accuracy: 0.0000e+00 - val_loss: 0.0746 - val_accuracy: 0.0037 - lr: 0.0010 - 4s/epoch - 104ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.07461 to 0.04371, saving model to LSTM6.h5
43/43 - 0s - loss: 0.0526 - accuracy: 0.0000e+00 - val_loss: 0.0437 - val_accuracy: 0.0037 - lr: 0.0010 - 291ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.04371 to 0.00965, saving model to LSTM6.h5
43/43 - 0s - loss: 0.0168 - accuracy: 0.0000e+00 - val_loss: 0.0096 - val_accuracy: 0.0037 - lr: 0.0010 - 267ms/epoch - 6ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.00965
43/43 - 0s - loss: 0.0040 - accuracy: 0.0000e+00 - val_loss: 0.0155 - val_accuracy: 0.0037 - lr: 0.0010 - 267ms/epoch - 6ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.00965 to 0.00469, saving model to LSTM6.h5
43/43 - 0s - loss: 0.0071 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 0.0010 - 286ms/epoch - 7ms/step
... epochs 6-54: val_loss did not improve from 0.00469; ReduceLROnPlateau cut lr to 1.0000e-04 (epoch 10) and 1.0000e-05 (epoch 15); loss 0.0026 -> 8.39e-04 ...
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00469
43/43 - 0s - loss: 8.3800e-04 - accuracy: 0.0000e+00 - val_loss: 0.0143 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 266ms/epoch - 6ms/step
Epoch 00055: early stopping

T3
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 110.56518298054853 
RMSE:	 10.514998001927937 
MAPE:	 8.473394546481362
TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9
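TEMA reduces EMA lag by combining three EMA passes. A pure-pandas sketch, assuming the standard definition TEMA = 3·EMA₁ − 3·EMA₂ + EMA₃ (where each EMA is applied to the previous one); TA-Lib's lookback handling may differ:

```python
import pandas as pd

def tema(price: pd.Series, timeperiod: int = 30) -> pd.Series:
    """Triple Exponential Moving Average: 3*EMA1 - 3*EMA2 + EMA3."""
    e1 = price.ewm(span=timeperiod, adjust=False).mean()
    e2 = e1.ewm(span=timeperiod, adjust=False).mean()
    e3 = e2.ewm(span=timeperiod, adjust=False).mean()
    return 3 * e1 - 3 * e2 + e3
```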

Working on TEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16762.799, Time=4.59 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14158.507, Time=2.83 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16445.598, Time=8.85 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-16144.282, Time=11.24 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15275.101, Time=9.09 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15897.090, Time=13.07 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16446.973, Time=9.72 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-16567.628, Time=3.34 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-16523.926, Time=3.78 sec
 ARIMA(1,3,1)(0,0,0)[0] intercept   : AIC=-16696.008, Time=3.35 sec

Best model:  ARIMA(1,3,1)(0,0,0)[0]          
Total fit time: 69.865 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(1, 3, 1)   Log Likelihood                8408.400
Date:                Sun, 12 Dec 2021   AIC                         -16762.799
Time:                        16:52:44   BIC                         -16636.147
Sample:                             0   HQIC                        -16714.159
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -5.289e-07      0.001     -0.000      1.000      -0.002       0.002
x2         -5.288e-07      0.001     -0.001      0.999      -0.002       0.002
x3         -5.306e-07      0.001     -0.000      1.000      -0.002       0.002
x4             1.0000      0.000   2045.695      0.000       0.999       1.001
x5         -5.041e-07      0.000     -0.001      0.999      -0.001       0.001
x6         -9.879e-07   4.33e-05     -0.023      0.982   -8.58e-05    8.38e-05
x7         -5.185e-07      0.001     -0.001      0.999      -0.001       0.001
x8             0.0001      0.000      0.643      0.520      -0.000       0.001
x9          9.794e-08      0.001      0.000      1.000      -0.001       0.001
x10            0.0001      0.000      0.313      0.754      -0.001       0.001
x11           -0.0004      0.000     -2.284      0.022      -0.001   -6.06e-05
x12            0.0005      0.000      2.453      0.014       0.000       0.001
x13        -5.277e-07      0.000     -0.002      0.999      -0.001       0.001
x14        -1.566e-06      0.000     -0.005      0.996      -0.001       0.001
x15        -5.136e-07   9.86e-05     -0.005      0.996      -0.000       0.000
x16         -7.66e-07      0.000     -0.002      0.999      -0.001       0.001
x17        -5.146e-07      0.000     -0.003      0.998      -0.000       0.000
x18        -1.701e-07      0.001     -0.000      1.000      -0.001       0.001
x19         -5.77e-07   8.54e-05     -0.007      0.995      -0.000       0.000
x20         5.026e-07      0.001      0.001      0.999      -0.001       0.001
x21        -2.058e-06      0.000     -0.010      0.992      -0.000       0.000
x22        -1.098e-06      0.001     -0.001      0.999      -0.003       0.003
x23        -1.472e-06      0.001     -0.003      0.998      -0.001       0.001
x24        -8.255e-07      0.001     -0.001      0.999      -0.002       0.002
ar.L1         -0.2866   3.63e-05  -7897.273      0.000      -0.287      -0.287
ma.L1         -0.9124   1.46e-06  -6.25e+05      0.000      -0.912      -0.912
sigma2       9.98e-11   7.23e-11      1.380      0.168    -4.2e-11    2.42e-10
===================================================================================
Ljung-Box (L1) (Q):                  83.51   Jarque-Bera (JB):           4742889.91
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            -5.71
Prob(H) (two-sided):                  0.00   Kurtosis:                       378.86
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 4.2e+22. Standard errors may be unstable.
ARIMA order: (1, 3, 1) 

/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05037, saving model to LSTM6.h5
90/90 - 5s - loss: 0.0961 - accuracy: 0.0000e+00 - val_loss: 0.0504 - val_accuracy: 0.0037 - lr: 0.0010 - 5s/epoch - 50ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.05037
90/90 - 0s - loss: 0.1160 - accuracy: 0.0000e+00 - val_loss: 0.2484 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 482ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.05037 to 0.01092, saving model to LSTM6.h5
90/90 - 1s - loss: 0.0670 - accuracy: 0.0000e+00 - val_loss: 0.0109 - val_accuracy: 0.0037 - lr: 0.0010 - 529ms/epoch - 6ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.01092
90/90 - 1s - loss: 0.0270 - accuracy: 0.0000e+00 - val_loss: 0.0156 - val_accuracy: 0.0037 - lr: 0.0010 - 503ms/epoch - 6ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.01092
90/90 - 1s - loss: 0.0116 - accuracy: 0.0000e+00 - val_loss: 0.0272 - val_accuracy: 0.0037 - lr: 0.0010 - 527ms/epoch - 6ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.01092
90/90 - 1s - loss: 0.0106 - accuracy: 0.0000e+00 - val_loss: 0.0153 - val_accuracy: 0.0037 - lr: 0.0010 - 512ms/epoch - 6ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.01092
90/90 - 1s - loss: 0.0075 - accuracy: 0.0000e+00 - val_loss: 0.0133 - val_accuracy: 0.0037 - lr: 0.0010 - 535ms/epoch - 6ms/step
Epoch 8/500

Epoch 00008: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00008: val_loss did not improve from 0.01092
90/90 - 1s - loss: 0.0119 - accuracy: 0.0000e+00 - val_loss: 0.0256 - val_accuracy: 0.0037 - lr: 0.0010 - 509ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.01092
90/90 - 1s - loss: 0.0135 - accuracy: 0.0000e+00 - val_loss: 0.0167 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 574ms/epoch - 6ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.01092 to 0.01013, saving model to LSTM6.h5
90/90 - 1s - loss: 0.0100 - accuracy: 0.0000e+00 - val_loss: 0.0101 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 515ms/epoch - 6ms/step
Epoch 11/500

Epoch 00011: val_loss improved from 0.01013 to 0.00804, saving model to LSTM6.h5
90/90 - 0s - loss: 0.0068 - accuracy: 0.0000e+00 - val_loss: 0.0080 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 497ms/epoch - 6ms/step
Epoch 12/500

Epoch 00012: val_loss improved from 0.00804 to 0.00696, saving model to LSTM6.h5
90/90 - 1s - loss: 0.0053 - accuracy: 0.0000e+00 - val_loss: 0.0070 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 551ms/epoch - 6ms/step
Epoch 13/500

Epoch 00013: val_loss improved from 0.00696 to 0.00632, saving model to LSTM6.h5
90/90 - 1s - loss: 0.0043 - accuracy: 0.0000e+00 - val_loss: 0.0063 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 512ms/epoch - 6ms/step
Epoch 14/500

Epoch 00014: val_loss improved from 0.00632 to 0.00592, saving model to LSTM6.h5
90/90 - 1s - loss: 0.0036 - accuracy: 0.0000e+00 - val_loss: 0.0059 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 506ms/epoch - 6ms/step
Epoch 15/500

Epoch 00015: val_loss improved from 0.00592 to 0.00572, saving model to LSTM6.h5
90/90 - 1s - loss: 0.0030 - accuracy: 0.0000e+00 - val_loss: 0.0057 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 562ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: val_loss improved from 0.00572 to 0.00568, saving model to LSTM6.h5
90/90 - 1s - loss: 0.0025 - accuracy: 0.0000e+00 - val_loss: 0.0057 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 518ms/epoch - 6ms/step
... epochs 17-47: val_loss did not improve from 0.00568; ReduceLROnPlateau cut lr to 1.0000e-05 (epoch 20); loss 0.0022 -> 9.32e-04 ...
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00568
90/90 - 1s - loss: 9.2329e-04 - accuracy: 0.0000e+00 - val_loss: 0.0149 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 561ms/epoch - 6ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00568
90/90 - 1s - loss: 9.1498e-04 - accuracy: 0.0000e+00 - val_loss: 0.0153 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 528ms/epoch - 6ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00568
90/90 - 1s - loss: 9.0690e-04 - accuracy: 0.0000e+00 - val_loss: 0.0156 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 503ms/epoch - 6ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00568
90/90 - 0s - loss: 8.9904e-04 - accuracy: 0.0000e+00 - val_loss: 0.0160 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 462ms/epoch - 5ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00568
90/90 - 0s - loss: 8.9141e-04 - accuracy: 0.0000e+00 - val_loss: 0.0164 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 484ms/epoch - 5ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00568
90/90 - 0s - loss: 8.8398e-04 - accuracy: 0.0000e+00 - val_loss: 0.0168 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 476ms/epoch - 5ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00568
90/90 - 1s - loss: 8.7677e-04 - accuracy: 0.0000e+00 - val_loss: 0.0172 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 564ms/epoch - 6ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00568
90/90 - 0s - loss: 8.6976e-04 - accuracy: 0.0000e+00 - val_loss: 0.0176 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 480ms/epoch - 5ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00568
90/90 - 0s - loss: 8.6294e-04 - accuracy: 0.0000e+00 - val_loss: 0.0180 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 497ms/epoch - 6ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.00568
90/90 - 0s - loss: 8.5630e-04 - accuracy: 0.0000e+00 - val_loss: 0.0184 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 458ms/epoch - 5ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.00568
90/90 - 0s - loss: 8.4984e-04 - accuracy: 0.0000e+00 - val_loss: 0.0188 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 493ms/epoch - 5ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.00568
90/90 - 0s - loss: 8.4355e-04 - accuracy: 0.0000e+00 - val_loss: 0.0193 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 475ms/epoch - 5ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.00568
90/90 - 0s - loss: 8.3742e-04 - accuracy: 0.0000e+00 - val_loss: 0.0197 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 475ms/epoch - 5ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.00568
90/90 - 0s - loss: 8.3145e-04 - accuracy: 0.0000e+00 - val_loss: 0.0202 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 459ms/epoch - 5ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.00568
90/90 - 0s - loss: 8.2562e-04 - accuracy: 0.0000e+00 - val_loss: 0.0206 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 496ms/epoch - 6ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.00568
90/90 - 0s - loss: 8.1992e-04 - accuracy: 0.0000e+00 - val_loss: 0.0211 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 479ms/epoch - 5ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.00568
90/90 - 1s - loss: 8.1436e-04 - accuracy: 0.0000e+00 - val_loss: 0.0215 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 587ms/epoch - 7ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.00568
90/90 - 0s - loss: 8.0893e-04 - accuracy: 0.0000e+00 - val_loss: 0.0220 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 478ms/epoch - 5ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.00568
90/90 - 1s - loss: 8.0362e-04 - accuracy: 0.0000e+00 - val_loss: 0.0224 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 540ms/epoch - 6ms/step
Epoch 00066: early stopping
SMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 109.05489356902385 
RMSE:	 10.442935103170173 
MAPE:	 8.732625329283675

EMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	44.4% Accuracy
MSE:	 73.04142380966745 
RMSE:	 8.546427546622475 
MAPE:	 7.099244401385842

WMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 84.94978866341171 
RMSE:	 9.21682096296829 
MAPE:	 7.490547440692417

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 151.61070955572364 
RMSE:	 12.313030072070955 
MAPE:	 11.085595013418024

KAMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 80.0014001509936 
RMSE:	 8.944350180476702 
MAPE:	 7.358601729961256

MIDPOINT
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 61.18379984959283 
RMSE:	 7.822007405365507 
MAPE:	 6.441728946960992

T3
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 110.56518298054853 
RMSE:	 10.514998001927937 
MAPE:	 8.473394546481362

TEMA
Prediction vs Close:		50.37% Accuracy
Prediction vs Prediction:	49.63% Accuracy
MSE:	 69.56753550271695 
RMSE:	 8.340715527022663 
MAPE:	 7.185876850367952
Runtime: mins: 54.85666802865001
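The accuracy figures above are directional hit rates rather than point-error metrics. A minimal sketch of that comparison (hypothetical numbers; a slightly simplified version of the sign-match logic used in the simulation loop):

```python
def directional_accuracy(pred, actual):
    """Fraction of steps where the prediction moves in the same
    direction as the actual close, relative to the previous close."""
    hits = [
        int((pred[i] > actual[i - 1]) == (actual[i] > actual[i - 1]))
        for i in range(1, len(pred))
    ]
    return sum(hits) / len(hits)

actual = [10.0, 11.0, 10.5, 12.0]
pred = [10.2, 11.3, 10.1, 12.5]
print(directional_accuracy(pred, actual))  # all three moves called correctly -> 1.0
```

The "Prediction vs Prediction" variant is analogous, comparing `pred[i]` against `pred[i-1]` instead of the previous close.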

Architecture Used

In [128]:
from google.colab import files
import cv2
uploaded = files.upload()
Saving Experiment6.png to Experiment6 (2).png
In [130]:
img = cv2.imread('Experiment6.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; convert for matplotlib
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture '+imgfile,fontsize=18)
plt.imshow(img)
Out[130]:
<matplotlib.image.AxesImage at 0x7f7662efeb90>

Model Plots

In [106]:
with open('simulation6_data.json') as json_file:
    simulation6 = json.load(json_file)
fileimg = 'Experiment6'
In [107]:
for i in range(len(list(simulation6.keys()))):
  SIM = list(simulation6.keys())[i]
  plot_train(simulation6,SIM)
  plot_test(simulation6,SIM)
----- Train RMSE for SMA ----- 8.884485327179771
----- Train_MSE_LSTM for SMA ----- 78.93407952887263
----- Train MAE LSTM for SMA ----- 7.7230452013589295
----- Test RMSE for SMA----- 10.442935103170173
----- Test_MSE_LSTM for SMA----- 109.05489356902385
----- Test_MAE_LSTM for SMA----- 8.732625329283675
----- Train RMSE for EMA ----- 10.171650757359036
----- Train_MSE_LSTM for EMA ----- 103.46247912968265
----- Train MAE LSTM for EMA ----- 8.997792035349331
----- Test RMSE for EMA----- 8.546427546622475
----- Test_MSE_LSTM for EMA----- 73.04142380966745
----- Test_MAE_LSTM for EMA----- 7.099244401385842
----- Train RMSE for WMA ----- 10.46606060205442
----- Train_MSE_LSTM for WMA ----- 109.53842452587571
----- Train MAE LSTM for WMA ----- 9.32499706302078
----- Test RMSE for WMA----- 9.21682096296829
----- Test_MSE_LSTM for WMA----- 84.94978866341171
----- Test_MAE_LSTM for WMA----- 7.490547440692417
----- Train RMSE for DEMA ----- 12.135400340426562
----- Train_MSE_LSTM for DEMA ----- 147.26794142242514
----- Train MAE LSTM for DEMA ----- 10.923669650609977
----- Test RMSE for DEMA----- 12.313030072070955
----- Test_MSE_LSTM for DEMA----- 151.61070955572364
----- Test_MAE_LSTM for DEMA----- 11.085595013418024
----- Train RMSE for KAMA ----- 10.536710095405018
----- Train_MSE_LSTM for KAMA ----- 111.02225963461002
----- Train MAE LSTM for KAMA ----- 9.489313762266152
----- Test RMSE for KAMA----- 8.944350180476702
----- Test_MSE_LSTM for KAMA----- 80.0014001509936
----- Test_MAE_LSTM for KAMA----- 7.358601729961256
----- Train RMSE for MIDPOINT ----- 9.483224784900598
----- Train_MSE_LSTM for MIDPOINT ----- 89.93155232095299
----- Train MAE LSTM for MIDPOINT ----- 8.402424233592125
----- Test RMSE for MIDPOINT----- 7.822007405365507
----- Test_MSE_LSTM for MIDPOINT----- 61.18379984959283
----- Test_MAE_LSTM for MIDPOINT----- 6.441728946960992
----- Train RMSE for T3 ----- 12.056010756671938
----- Train_MSE_LSTM for T3 ----- 145.3473953649895
----- Train MAE LSTM for T3 ----- 10.855993463265904
----- Test RMSE for T3----- 10.514998001927937
----- Test_MSE_LSTM for T3----- 110.56518298054853
----- Test_MAE_LSTM for T3----- 8.473394546481362
----- Train RMSE for TEMA ----- 7.432705233614638
----- Train_MSE_LSTM for TEMA ----- 55.24510708980243
----- Train MAE LSTM for TEMA ----- 5.131892957309109
----- Test RMSE for TEMA----- 8.340715527022663
----- Test_MSE_LSTM for TEMA----- 69.56753550271695
----- Test_MAE_LSTM for TEMA----- 7.185876850367952

ARIMA with Exogenous Variables Multistep Multivariate LSTM Hybrid Model Experiment 7
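As in the previous experiments, the close series is decomposed into a low-volatility moving-average component (forecast by ARIMA) and a high-volatility residual (forecast by the LSTM), and the two forecasts are summed back together. A minimal sketch of that decomposition using a plain pandas rolling mean in place of the TA-Lib functions:

```python
import pandas as pd

close = pd.Series([9.0, 12.0, 15.0, 12.0, 18.0, 15.0])

# Low-volatility component: a simple moving average (stand-in for TA-Lib SMA)
low_vol = close.rolling(window=3).mean().fillna(0)

# High-volatility component: the residual left after removing the MA
high_vol = close - low_vol

# The hybrid forecast is ARIMA(low_vol) + LSTM(high_vol); by construction
# the two components sum back to the original close
print((low_vol + high_vol).equals(close))  # True
```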

In [131]:
def get_arima_exog(dataframe, original_data, train_len, test_len):
    # Prepare train and test data for the exogenous variables
    # (use the passed-in dataframe rather than the global low_vol)
    X_value = pd.DataFrame(dataframe.iloc[:, :])
    y_value = pd.DataFrame(dataframe.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    X_train, X_test = split_train_test(X_scale_dataset)
    y_train, y_test = split_train_test(y_scale_dataset)
    yc_train, yc_test = split_train_test(original_data)
    yc = yc_test.values.tolist()
    y_train_list = y_train.flatten().tolist()
    y_test_list = y_test.flatten().tolist()
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)

    # Determine the model order with a stepwise search over the training data
    model = auto_arima(y_train_list, exogenous=X_train, trace=True, error_action='ignore',
                       start_p=1, start_q=1, max_p=3, max_q=3, d=3,
                       suppress_warnings=True, stepwise=True, seasonal=True)
    print(model.summary())
    model.fit(y_train_list, maxiter=200)
    order = model.get_params()['order']
    print('ARIMA order:', order, '\n')

    # Generate walk-forward predictions: refit on the growing history,
    # forecast one step ahead, then append the observed value
    prediction = []
    for i in range(len(y_test_list)):
        model = pmdarima.ARIMA(order=order)
        model.fit(y_train_list)
        prediction.append(model.predict()[0])
        y_train_list.append(y_test_list[i])

    predictionte = y_scaler.inverse_transform(np.array(prediction).reshape(-1, 1))
    y_test_ = y_scaler.inverse_transform(np.array(y_test_list).reshape(-1, 1))

    # Generate error data (both series in original, unscaled units)
    mse = mean_squared_error(y_test_, predictionte)
    rmse = mse ** 0.5
    mae = mean_absolute_error(y_test_, predictionte)
    return yc, predictionte.flatten().tolist(), mse, rmse, mae
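The loop in `get_arima_exog` is a walk-forward scheme: at each test step the model is refit on the history, one step is forecast, and the just-observed value is appended. A minimal stand-alone sketch of that pattern (a naive last-value forecaster stands in for `pmdarima.ARIMA`, so the sketch runs without pmdarima):

```python
def walk_forward(train, test, fit_predict):
    """One-step walk-forward forecasting: after each test step, the
    observed value joins the training history before the next forecast."""
    history = list(train)
    preds = []
    for obs in test:
        preds.append(fit_predict(history))  # forecast one step ahead
        history.append(obs)                 # then reveal the true value
    return preds

# Naive stand-in for ARIMA(order=order).fit(history).predict()[0]
naive = lambda history: history[-1]
print(walk_forward([1, 2, 3], [4, 5, 6], naive))  # [3, 4, 5]
```

Refitting every step is what makes the per-MA runs slow; caching the order from `auto_arima` (as the notebook does) keeps only the fit inside the loop.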
In [132]:
def get_lstm(data,original_data, train_len, test_len,img_file,ma ,lstm_len=3):
    # prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Get data and check shape: X is (samples, n_steps_in, n_features),
    # e.g. 224 x 3 x 21; yc holds the corresponding closing price values
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)
    # pdb.set_trace()
    X_train, X_test, = split_train_test(X)
    y_train, y_test, = split_train_test(y)
    # yc_train, yc_test, = split_train_test(original_data)
    index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20  # empirical offset subtracted from the test predictions below
    input_dim = X_train.shape[1]    # n_steps_in, e.g. 3
    feature_size = X_train.shape[2] # number of features, e.g. 24
    output_dim = y_train.shape[1]   # n_steps_out, e.g. 1



    # Option 1
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    # model.add(Dense(units=64,activation='relu'))
    # model.add(Dropout(0.5))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')

    # ## Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()


    # # # option 2
    # model = Sequential()
    # model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
    # model.add(Dense(64))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(lr = 0.001), loss='mean_squared_error', metrics=['accuracy'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM7.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 3
    # define custom activation
    # 
    class Double_Tanh(Activation):
        def __init__(self, activation, **kwargs):
            super(Double_Tanh, self).__init__(activation, **kwargs)
            self.__name__ = 'double_tanh'

    def double_tanh(x):
        return (K.tanh(x) * 2)

    get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
        # Model Generation
    model = Sequential()
    #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
    model.add(Dense(1))
    model.add(Activation(double_tanh))
    model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # Common code
    callbacks = [
    EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    ModelCheckpoint('LSTM7.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file+'.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # plot loss
    fname2 = img_file+'-'+ma
    plt.title(img_file+'-'+ma+' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2+'.png',dpi='figure')
    pyplot.show()

    # Option 4
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(x_train.shape[1], 1)))
    # model.add(LSTM(units=int(lstm_len/2)))
    # model.add(Dense(1, activation='sigmoid'))
    # model.compile(loss='mean_squared_error', optimizer='adam')
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()



    # Generate predictions
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
    outputtr = []
    for i in range(len(predictiontr)):
        outputtr.extend(predictiontr[i])
    predictiontr = outputtr
    # Generate error data

    ## replace with yc , xtest generated by new multistep method
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
    mse_tr = mean_squared_error(Original_tr, predictiontr)  # compare in unscaled units
    rmse_tr = mse_tr ** 0.5
    mae_tr = mean_absolute_error(Original_tr, pd.Series(predictiontr))


    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte)-det).tolist()
    outputte = []
    for i in range(len(predictionte)):
        outputte.extend(predictionte[i])
    predictionte = outputte
    # Generate error data

    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
    mse_te = mean_squared_error(Original_te, predictionte)  # compare in unscaled units
    rmse_te = mse_te ** 0.5
    mae_te = mean_absolute_error(Original_te, pd.Series(predictionte))

    return Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,Original_te,predictionte, mse_te,rmse_te,mae_te
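The `double_tanh` activation used in Option 3 simply rescales `tanh` so the output layer can span roughly (-2, 2) instead of (-1, 1), giving some headroom around the MinMax-scaled (-1, 1) targets. The mapping itself is framework-independent; a minimal sketch with plain `math`:

```python
import math

def double_tanh(x):
    # Same mapping as the Keras custom activation: K.tanh(x) * 2
    return math.tanh(x) * 2.0

print(double_tanh(0.0))              # 0.0 at the origin
print(round(double_tanh(10.0), 6))   # saturates near the upper bound 2.0
```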
In [133]:
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation7 = {}
    imgfile = 'Experiment7'
    for ma in optimized_period:
                print(ma)
                print(functions[ma])
                print ( int( optimized_period[ma]))
              # if ma == 'SMA':
                low_vol = df.apply(lambda c:  functions[ma](c, timeperiod = int( optimized_period[ma])))
                low_vol = low_vol.fillna(0)
                low_vol_data = df['close']
                high_vol = pd.DataFrame()
                df2 = df.copy()
                for i in df2.columns:
                  if i in low_vol.columns:
                    high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
                high_vol_data = df['close']
                ## *****************************************************
                # Generate ARIMA and LSTM predictions
                print('\nWorking on ' + ma + ' predictions')
                try:
                  print('parameters used : ', train_len, test_len)
                  low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse,low_vol_mae = get_arima_exog(low_vol,low_vol_data, train_len, test_len)
                except Exception as e:
                    print('ARIMA error:', e, '-- skipping to next MA type')
                    continue
                Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse,high_vol_mae, = get_lstm(high_vol,high_vol_data, train_len, test_len,imgfile,ma)
                final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps 
                mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
                rmse_ftr = mse_ftr ** 0.5
                mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
                mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)

                final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
                mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
                rmse = mse ** 0.5
                mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
                mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
                # Generate prediction accuracy
                actual = df['close'].tail(test_len).values
                result_1 = []
                result_2 = []
                for i in range(1, len(final_prediction)):
                    # Compare prediction to previous close price
                    if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                        result_1.append(1)
                    elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                        result_1.append(1)
                    else:
                        result_1.append(0)

                    # Compare prediction to previous prediction
                    if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                        result_2.append(1)
                    elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                        result_2.append(1)
                    else:
                        result_2.append(0)

                accuracy_1 = np.mean(result_1)
                accuracy_2 = np.mean(result_2)

                simulation7[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
                                              'rmse': low_vol_rmse, 'mae' : low_vol_mae},
                                  'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
                                              'rmse': high_vol_rmse, 'mae' : high_vol_mae},
                                  'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
                                              'rmse': rmse_ftr, 'mae' : mae_ftr},
                                  'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
                                            'rmse': rmse, 'mae': mae },
                                  'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}

                # save simulation data here as checkpoint
                with open('simulation7_data.json', 'w') as fp:
                    json.dump(simulation7, fp)

                # use a separate name so the outer loop variable `ma` is not shadowed
                for key in simulation7.keys():
                    print('\n' + key)
                    print('Prediction vs Close:\t\t' + str(round(100*simulation7[key]['accuracy']['prediction vs close'], 2))
                          + '% Accuracy')
                    print('Prediction vs Prediction:\t' + str(round(100*simulation7[key]['accuracy']['prediction vs prediction'], 2))
                          + '% Accuracy')
                    print('MSE:\t', simulation7[key]['final']['mse'],
                          '\nRMSE:\t', simulation7[key]['final']['rmse'],
                          '\nMAE:\t', simulation7[key]['final']['mae'])
              # else:
              #   break
    elapsed = timeit.default_timer() - start_time
    print('Runtime: mins:',elapsed/60)
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17

Working on SMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16999.786, Time=3.39 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14568.592, Time=4.80 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15578.394, Time=8.54 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14566.592, Time=7.06 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16966.361, Time=9.28 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-16121.635, Time=10.17 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-17214.069, Time=13.33 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-14562.592, Time=9.04 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-14572.319, Time=10.11 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-14403.474, Time=43.98 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 119.725 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8634.035
Date:                Sun, 12 Dec 2021   AIC                         -17214.069
Time:                        17:05:38   BIC                         -17087.416
Sample:                             0   HQIC                        -17165.429
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -4.257e-09   9.55e-06     -0.000      1.000   -1.87e-05    1.87e-05
x2         -4.256e-09   9.56e-06     -0.000      1.000   -1.87e-05    1.87e-05
x3         -4.313e-09   9.62e-06     -0.000      1.000   -1.89e-05    1.88e-05
x4             1.0000   9.61e-06   1.04e+05      0.000       1.000       1.000
x5         -3.891e-09   9.14e-06     -0.000      1.000   -1.79e-05    1.79e-05
x6         -1.122e-08   1.03e-05     -0.001      0.999   -2.03e-05    2.03e-05
x7         -4.223e-09   9.54e-06     -0.000      1.000   -1.87e-05    1.87e-05
x8         -4.234e-09   9.55e-06     -0.000      1.000   -1.87e-05    1.87e-05
x9         -1.626e-10   6.54e-07     -0.000      1.000   -1.28e-06    1.28e-06
x10        -6.831e-10   2.91e-06     -0.000      1.000    -5.7e-06     5.7e-06
x11        -4.115e-09   9.41e-06     -0.000      1.000   -1.84e-05    1.84e-05
x12        -4.303e-09   9.62e-06     -0.000      1.000   -1.89e-05    1.88e-05
x13        -4.288e-09    9.6e-06     -0.000      1.000   -1.88e-05    1.88e-05
x14        -3.749e-08   2.81e-05     -0.001      0.999   -5.51e-05     5.5e-05
x15        -5.032e-09   1.04e-05     -0.000      1.000   -2.04e-05    2.03e-05
x16        -3.685e-09      9e-06     -0.000      1.000   -1.76e-05    1.76e-05
x17        -3.286e-09   8.45e-06     -0.000      1.000   -1.66e-05    1.66e-05
x18         -1.22e-08   1.59e-05     -0.001      0.999   -3.11e-05    3.11e-05
x19        -5.685e-09    1.1e-05     -0.001      1.000   -2.16e-05    2.16e-05
x20         -1.42e-08   1.69e-05     -0.001      0.999   -3.32e-05    3.32e-05
x21        -5.194e-08   3.31e-05     -0.002      0.999   -6.49e-05    6.48e-05
x22        -2.548e-08   2.31e-05     -0.001      0.999   -4.53e-05    4.52e-05
x23        -3.534e-08   2.73e-05     -0.001      0.999   -5.35e-05    5.34e-05
x24        -1.566e-08    1.8e-05     -0.001      0.999   -3.53e-05    3.53e-05
ma.L1         -1.3899   4.98e-09  -2.79e+08      0.000      -1.390      -1.390
ma.L2          0.4032   4.98e-09   8.09e+07      0.000       0.403       0.403
sigma2      7.635e-11   6.92e-11      1.103      0.270   -5.93e-11    2.12e-10
===================================================================================
Ljung-Box (L1) (Q):                  68.48   Jarque-Bera (JB):           5579791.06
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            10.12
Prob(H) (two-sided):                  0.00   Kurtosis:                       410.36
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.69e+24. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 1.09957, saving model to LSTM7.h5
48/48 - 2s - loss: 0.1505 - mse: 0.1505 - mae: 0.3093 - val_loss: 1.0996 - val_mse: 1.0996 - val_mae: 1.0076 - lr: 0.0010 - 2s/epoch - 51ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 1.09957 to 0.56928, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0396 - mse: 0.0396 - mae: 0.1611 - val_loss: 0.5693 - val_mse: 0.5693 - val_mae: 0.7083 - lr: 0.0010 - 241ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.56928 to 0.47229, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0254 - mse: 0.0254 - mae: 0.1271 - val_loss: 0.4723 - val_mse: 0.4723 - val_mae: 0.6385 - lr: 0.0010 - 245ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0169 - mse: 0.0169 - mae: 0.1027 - val_loss: 0.5115 - val_mse: 0.5115 - val_mae: 0.6654 - lr: 0.0010 - 198ms/epoch - 4ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0113 - mse: 0.0113 - mae: 0.0845 - val_loss: 0.5410 - val_mse: 0.5410 - val_mae: 0.6857 - lr: 0.0010 - 234ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0101 - mse: 0.0101 - mae: 0.0792 - val_loss: 0.5219 - val_mse: 0.5219 - val_mae: 0.6726 - lr: 0.0010 - 231ms/epoch - 5ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0081 - mse: 0.0081 - mae: 0.0689 - val_loss: 0.5238 - val_mse: 0.5238 - val_mae: 0.6751 - lr: 0.0010 - 233ms/epoch - 5ms/step
Epoch 8/500

Epoch 00008: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00008: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0634 - val_loss: 0.5717 - val_mse: 0.5717 - val_mae: 0.7081 - lr: 0.0010 - 231ms/epoch - 5ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0584 - val_loss: 0.5696 - val_mse: 0.5696 - val_mae: 0.7067 - lr: 1.0000e-04 - 226ms/epoch - 5ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0614 - val_loss: 0.5560 - val_mse: 0.5560 - val_mae: 0.6979 - lr: 1.0000e-04 - 234ms/epoch - 5ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0602 - val_loss: 0.5515 - val_mse: 0.5515 - val_mae: 0.6949 - lr: 1.0000e-04 - 213ms/epoch - 4ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0592 - val_loss: 0.5506 - val_mse: 0.5506 - val_mae: 0.6945 - lr: 1.0000e-04 - 231ms/epoch - 5ms/step
Epoch 13/500

Epoch 00013: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00013: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0580 - val_loss: 0.5461 - val_mse: 0.5461 - val_mae: 0.6916 - lr: 1.0000e-04 - 197ms/epoch - 4ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0571 - val_loss: 0.5456 - val_mse: 0.5456 - val_mae: 0.6912 - lr: 1.0000e-05 - 204ms/epoch - 4ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0546 - val_loss: 0.5454 - val_mse: 0.5454 - val_mae: 0.6911 - lr: 1.0000e-05 - 212ms/epoch - 4ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0567 - val_loss: 0.5446 - val_mse: 0.5446 - val_mae: 0.6906 - lr: 1.0000e-05 - 210ms/epoch - 4ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0561 - val_loss: 0.5436 - val_mse: 0.5436 - val_mae: 0.6899 - lr: 1.0000e-05 - 204ms/epoch - 4ms/step
Epoch 18/500

Epoch 00018: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00018: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0593 - val_loss: 0.5428 - val_mse: 0.5428 - val_mae: 0.6894 - lr: 1.0000e-05 - 227ms/epoch - 5ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0596 - val_loss: 0.5426 - val_mse: 0.5426 - val_mae: 0.6892 - lr: 1.0000e-05 - 216ms/epoch - 5ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0575 - val_loss: 0.5421 - val_mse: 0.5421 - val_mae: 0.6889 - lr: 1.0000e-05 - 202ms/epoch - 4ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0591 - val_loss: 0.5408 - val_mse: 0.5408 - val_mae: 0.6881 - lr: 1.0000e-05 - 217ms/epoch - 5ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0607 - val_loss: 0.5397 - val_mse: 0.5397 - val_mae: 0.6873 - lr: 1.0000e-05 - 189ms/epoch - 4ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0580 - val_loss: 0.5374 - val_mse: 0.5374 - val_mae: 0.6858 - lr: 1.0000e-05 - 233ms/epoch - 5ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0608 - val_loss: 0.5368 - val_mse: 0.5368 - val_mae: 0.6854 - lr: 1.0000e-05 - 196ms/epoch - 4ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0553 - val_loss: 0.5359 - val_mse: 0.5359 - val_mae: 0.6848 - lr: 1.0000e-05 - 194ms/epoch - 4ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0589 - val_loss: 0.5355 - val_mse: 0.5355 - val_mae: 0.6845 - lr: 1.0000e-05 - 232ms/epoch - 5ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0581 - val_loss: 0.5351 - val_mse: 0.5351 - val_mae: 0.6843 - lr: 1.0000e-05 - 273ms/epoch - 6ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0561 - val_loss: 0.5346 - val_mse: 0.5346 - val_mae: 0.6839 - lr: 1.0000e-05 - 220ms/epoch - 5ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0571 - val_loss: 0.5329 - val_mse: 0.5329 - val_mae: 0.6828 - lr: 1.0000e-05 - 221ms/epoch - 5ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0589 - val_loss: 0.5332 - val_mse: 0.5332 - val_mae: 0.6830 - lr: 1.0000e-05 - 232ms/epoch - 5ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0567 - val_loss: 0.5325 - val_mse: 0.5325 - val_mae: 0.6826 - lr: 1.0000e-05 - 226ms/epoch - 5ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0585 - val_loss: 0.5329 - val_mse: 0.5329 - val_mae: 0.6829 - lr: 1.0000e-05 - 267ms/epoch - 6ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0585 - val_loss: 0.5338 - val_mse: 0.5338 - val_mae: 0.6836 - lr: 1.0000e-05 - 233ms/epoch - 5ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0549 - val_loss: 0.5329 - val_mse: 0.5329 - val_mae: 0.6829 - lr: 1.0000e-05 - 216ms/epoch - 5ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0540 - val_loss: 0.5328 - val_mse: 0.5328 - val_mae: 0.6828 - lr: 1.0000e-05 - 236ms/epoch - 5ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0570 - val_loss: 0.5322 - val_mse: 0.5322 - val_mae: 0.6825 - lr: 1.0000e-05 - 217ms/epoch - 5ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0589 - val_loss: 0.5294 - val_mse: 0.5294 - val_mae: 0.6806 - lr: 1.0000e-05 - 243ms/epoch - 5ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0584 - val_loss: 0.5271 - val_mse: 0.5271 - val_mae: 0.6790 - lr: 1.0000e-05 - 204ms/epoch - 4ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0583 - val_loss: 0.5264 - val_mse: 0.5264 - val_mae: 0.6785 - lr: 1.0000e-05 - 228ms/epoch - 5ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0574 - val_loss: 0.5251 - val_mse: 0.5251 - val_mae: 0.6776 - lr: 1.0000e-05 - 204ms/epoch - 4ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0565 - val_loss: 0.5230 - val_mse: 0.5230 - val_mae: 0.6762 - lr: 1.0000e-05 - 216ms/epoch - 5ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0583 - val_loss: 0.5216 - val_mse: 0.5216 - val_mae: 0.6753 - lr: 1.0000e-05 - 237ms/epoch - 5ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0564 - val_loss: 0.5218 - val_mse: 0.5218 - val_mae: 0.6754 - lr: 1.0000e-05 - 206ms/epoch - 4ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0580 - val_loss: 0.5229 - val_mse: 0.5229 - val_mae: 0.6762 - lr: 1.0000e-05 - 215ms/epoch - 4ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0586 - val_loss: 0.5221 - val_mse: 0.5221 - val_mae: 0.6757 - lr: 1.0000e-05 - 261ms/epoch - 5ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0557 - val_loss: 0.5206 - val_mse: 0.5206 - val_mae: 0.6747 - lr: 1.0000e-05 - 237ms/epoch - 5ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0572 - val_loss: 0.5215 - val_mse: 0.5215 - val_mae: 0.6754 - lr: 1.0000e-05 - 228ms/epoch - 5ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0594 - val_loss: 0.5210 - val_mse: 0.5210 - val_mae: 0.6750 - lr: 1.0000e-05 - 249ms/epoch - 5ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0574 - val_loss: 0.5210 - val_mse: 0.5210 - val_mae: 0.6751 - lr: 1.0000e-05 - 234ms/epoch - 5ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0553 - val_loss: 0.5204 - val_mse: 0.5204 - val_mae: 0.6747 - lr: 1.0000e-05 - 192ms/epoch - 4ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0561 - val_loss: 0.5199 - val_mse: 0.5199 - val_mae: 0.6744 - lr: 1.0000e-05 - 196ms/epoch - 4ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0585 - val_loss: 0.5201 - val_mse: 0.5201 - val_mae: 0.6746 - lr: 1.0000e-05 - 206ms/epoch - 4ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.47229
48/48 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0560 - val_loss: 0.5195 - val_mse: 0.5195 - val_mae: 0.6742 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 00053: early stopping
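The training log above is consistent with a standard Keras callback stack. The checkpoint filename (`LSTM7.h5`), the factor-of-10 learning-rate drops from 1e-3 down to a 1e-5 floor, and the early stop are read directly off the log; the patience values and the commented `fit` call are assumptions, so treat this as a plausible reconstruction rather than the notebook's exact configuration.

```python
# Hedged reconstruction of the callbacks implied by the training log.
# Patience values are assumptions; filename, LR schedule and monitoring
# of val_loss are taken from the output above.
from tensorflow.keras.callbacks import (
    ModelCheckpoint, ReduceLROnPlateau, EarlyStopping)

callbacks = [
    # Save the best weights seen so far, judged by validation loss.
    ModelCheckpoint('LSTM7.h5', monitor='val_loss',
                    save_best_only=True, verbose=1),
    # Cut the learning rate by 10x on plateau (1e-3 -> 1e-4 -> 1e-5 floor).
    ReduceLROnPlateau(monitor='val_loss', factor=0.1,
                      patience=5, min_lr=1e-5, verbose=1),
    # Stop after a long stretch without val_loss improvement.
    EarlyStopping(monitor='val_loss', patience=50, verbose=1),
]

# model.fit(X_train, y_train, epochs=500, validation_data=(X_val, y_val),
#           callbacks=callbacks, verbose=2)
```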
SMA
Prediction vs Close:		50.0% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 133.6684179910327 
RMSE:	 11.561505870388714 
MAPE:	 10.289389775089397
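The error figures reported above can be reproduced with plain numpy. The MSE/RMSE/MAPE definitions are standard; the directional-accuracy helper is an assumption about how "Prediction vs Close" accuracy is computed (sign of the day-over-day move), since the notebook's exact comparison is not shown here.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MSE, RMSE and MAPE (in percent), as reported in the blocks above."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100
    return mse, rmse, mape

def directional_accuracy(y_true, y_pred):
    """Share of steps (percent) where the predicted move has the right sign.
    Assumed interpretation of the 'Prediction vs Close' accuracy line."""
    true_dir = np.sign(np.diff(np.asarray(y_true, dtype=float)))
    pred_dir = np.sign(np.diff(np.asarray(y_pred, dtype=float)))
    return np.mean(true_dir == pred_dir) * 100
```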
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51
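TA-Lib's `EMA`, whose signature is printed above, can be approximated in plain numpy. Seeding the series with the simple average of the first `timeperiod` values and smoothing with alpha = 2/(timeperiod + 1) follows TA-Lib's documented convention, but this is a sketch, not the library's exact implementation.

```python
import numpy as np

def ema(price, timeperiod=30):
    """Exponential moving average in TA-Lib's style: the first output is
    the SMA of the first `timeperiod` values, then each point is smoothed
    with alpha = 2 / (timeperiod + 1). Pre-lookback positions are NaN."""
    price = np.asarray(price, dtype=float)
    out = np.full_like(price, np.nan)
    alpha = 2.0 / (timeperiod + 1)
    out[timeperiod - 1] = price[:timeperiod].mean()  # SMA seed
    for i in range(timeperiod, len(price)):
        out[i] = alpha * price[i] + (1 - alpha) * out[i - 1]
    return out
```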

Working on EMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16999.778, Time=3.24 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14568.589, Time=4.62 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-14606.447, Time=5.96 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14566.589, Time=7.07 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15343.613, Time=9.68 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15047.583, Time=12.72 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16858.964, Time=11.56 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17024.022, Time=6.23 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-16998.618, Time=3.56 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17081.451, Time=6.80 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=inf, Time=17.00 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-16997.990, Time=3.75 sec
 ARIMA(3,3,1)(0,0,0)[0] intercept   : AIC=-16992.667, Time=4.25 sec

Best model:  ARIMA(3,3,1)(0,0,0)[0]          
Total fit time: 96.460 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 1)   Log Likelihood                8569.726
Date:                Sun, 12 Dec 2021   AIC                         -17081.451
Time:                        17:11:15   BIC                         -16945.417
Sample:                             0   HQIC                        -17029.208
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -2.316e-10   9.89e-05  -2.34e-06      1.000      -0.000       0.000
x2         -2.309e-10   9.88e-05  -2.34e-06      1.000      -0.000       0.000
x3         -2.325e-10   9.91e-05  -2.35e-06      1.000      -0.000       0.000
x4             1.0000    9.9e-05   1.01e+04      0.000       1.000       1.000
x5         -2.108e-10   9.43e-05  -2.24e-06      1.000      -0.000       0.000
x6         -7.997e-10      0.000  -4.63e-06      1.000      -0.000       0.000
x7         -2.295e-10   9.85e-05  -2.33e-06      1.000      -0.000       0.000
x8         -2.244e-10   9.74e-05   -2.3e-06      1.000      -0.000       0.000
x9         -1.166e-11   1.98e-05   -5.9e-07      1.000   -3.87e-05    3.87e-05
x10        -4.454e-11   4.19e-05  -1.06e-06      1.000   -8.22e-05    8.22e-05
x11        -2.219e-10   9.68e-05  -2.29e-06      1.000      -0.000       0.000
x12        -2.264e-10    9.8e-05  -2.31e-06      1.000      -0.000       0.000
x13        -2.315e-10   9.89e-05  -2.34e-06      1.000      -0.000       0.000
x14        -1.767e-09      0.000  -6.47e-06      1.000      -0.001       0.001
x15        -2.096e-10   9.38e-05  -2.23e-06      1.000      -0.000       0.000
x16        -5.257e-10      0.000   -3.5e-06      1.000      -0.000       0.000
x17        -2.143e-10   9.53e-05  -2.25e-06      1.000      -0.000       0.000
x18        -3.776e-11   3.61e-05  -1.05e-06      1.000   -7.08e-05    7.08e-05
x19         -2.52e-10      0.000  -2.41e-06      1.000      -0.000       0.000
x20        -2.417e-10   9.51e-05  -2.54e-06      1.000      -0.000       0.000
x21         -3.16e-09      0.000  -8.64e-06      1.000      -0.001       0.001
x22        -2.955e-09      0.000  -8.32e-06      1.000      -0.001       0.001
x23        -1.664e-09      0.000  -6.29e-06      1.000      -0.001       0.001
x24        -1.568e-09      0.000  -6.07e-06      1.000      -0.001       0.001
ar.L1         -0.4923    1.2e-09  -4.09e+08      0.000      -0.492      -0.492
ar.L2         -0.1923      7e-10  -2.75e+08      0.000      -0.192      -0.192
ar.L3         -0.0461   3.32e-10  -1.39e+08      0.000      -0.046      -0.046
ma.L1         -0.7077   2.73e-09  -2.59e+08      0.000      -0.708      -0.708
sigma2       8.99e-11   6.96e-11      1.291      0.197   -4.66e-11    2.26e-10
===================================================================================
Ljung-Box (L1) (Q):                  55.51   Jarque-Bera (JB):           4268313.90
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.44
Prob(H) (two-sided):                  0.00   Kurtosis:                       359.56
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.36e+28. Standard errors may be unstable.
ARIMA order: (3, 3, 1) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.14368, saving model to LSTM7.h5
16/16 - 2s - loss: 0.1485 - mse: 0.1485 - mae: 0.2985 - val_loss: 0.1437 - val_mse: 0.1437 - val_mae: 0.3319 - lr: 0.0010 - 2s/epoch - 145ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0519 - mse: 0.0519 - mae: 0.1969 - val_loss: 0.1677 - val_mse: 0.1677 - val_mae: 0.3644 - lr: 0.0010 - 77ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0194 - mse: 0.0194 - mae: 0.1138 - val_loss: 0.1986 - val_mse: 0.1986 - val_mae: 0.4015 - lr: 0.0010 - 78ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0142 - mse: 0.0142 - mae: 0.0937 - val_loss: 0.1694 - val_mse: 0.1694 - val_mae: 0.3635 - lr: 0.0010 - 94ms/epoch - 6ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0140 - mse: 0.0140 - mae: 0.0926 - val_loss: 0.1751 - val_mse: 0.1751 - val_mae: 0.3703 - lr: 0.0010 - 77ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0117 - mse: 0.0117 - mae: 0.0867 - val_loss: 0.1673 - val_mse: 0.1673 - val_mae: 0.3603 - lr: 0.0010 - 91ms/epoch - 6ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0105 - mse: 0.0105 - mae: 0.0796 - val_loss: 0.1668 - val_mse: 0.1668 - val_mae: 0.3596 - lr: 1.0000e-04 - 83ms/epoch - 5ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0108 - mse: 0.0108 - mae: 0.0825 - val_loss: 0.1656 - val_mse: 0.1656 - val_mae: 0.3579 - lr: 1.0000e-04 - 93ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0109 - mse: 0.0109 - mae: 0.0837 - val_loss: 0.1643 - val_mse: 0.1643 - val_mae: 0.3561 - lr: 1.0000e-04 - 94ms/epoch - 6ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0111 - mse: 0.0111 - mae: 0.0829 - val_loss: 0.1626 - val_mse: 0.1626 - val_mae: 0.3538 - lr: 1.0000e-04 - 97ms/epoch - 6ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0108 - mse: 0.0108 - mae: 0.0824 - val_loss: 0.1614 - val_mse: 0.1614 - val_mae: 0.3522 - lr: 1.0000e-04 - 96ms/epoch - 6ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0101 - mse: 0.0101 - mae: 0.0805 - val_loss: 0.1614 - val_mse: 0.1614 - val_mae: 0.3522 - lr: 1.0000e-05 - 79ms/epoch - 5ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0104 - mse: 0.0104 - mae: 0.0802 - val_loss: 0.1614 - val_mse: 0.1614 - val_mae: 0.3522 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0103 - mse: 0.0103 - mae: 0.0815 - val_loss: 0.1613 - val_mse: 0.1613 - val_mae: 0.3520 - lr: 1.0000e-05 - 89ms/epoch - 6ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0107 - mse: 0.0107 - mae: 0.0822 - val_loss: 0.1613 - val_mse: 0.1613 - val_mae: 0.3521 - lr: 1.0000e-05 - 88ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00016: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0103 - mse: 0.0103 - mae: 0.0817 - val_loss: 0.1613 - val_mse: 0.1613 - val_mae: 0.3520 - lr: 1.0000e-05 - 81ms/epoch - 5ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0105 - mse: 0.0105 - mae: 0.0813 - val_loss: 0.1611 - val_mse: 0.1611 - val_mae: 0.3517 - lr: 1.0000e-05 - 80ms/epoch - 5ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0098 - mse: 0.0098 - mae: 0.0789 - val_loss: 0.1612 - val_mse: 0.1612 - val_mae: 0.3518 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0102 - mse: 0.0102 - mae: 0.0783 - val_loss: 0.1611 - val_mse: 0.1611 - val_mae: 0.3517 - lr: 1.0000e-05 - 81ms/epoch - 5ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0105 - mse: 0.0105 - mae: 0.0817 - val_loss: 0.1610 - val_mse: 0.1610 - val_mae: 0.3516 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0094 - mse: 0.0094 - mae: 0.0782 - val_loss: 0.1611 - val_mse: 0.1611 - val_mae: 0.3517 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0115 - mse: 0.0115 - mae: 0.0851 - val_loss: 0.1610 - val_mse: 0.1610 - val_mae: 0.3515 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0105 - mse: 0.0105 - mae: 0.0817 - val_loss: 0.1608 - val_mse: 0.1608 - val_mae: 0.3513 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0105 - mse: 0.0105 - mae: 0.0819 - val_loss: 0.1608 - val_mse: 0.1608 - val_mae: 0.3513 - lr: 1.0000e-05 - 81ms/epoch - 5ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0097 - mse: 0.0097 - mae: 0.0776 - val_loss: 0.1607 - val_mse: 0.1607 - val_mae: 0.3512 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0098 - mse: 0.0098 - mae: 0.0787 - val_loss: 0.1608 - val_mse: 0.1608 - val_mae: 0.3513 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0112 - mse: 0.0112 - mae: 0.0834 - val_loss: 0.1607 - val_mse: 0.1607 - val_mae: 0.3511 - lr: 1.0000e-05 - 81ms/epoch - 5ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0109 - mse: 0.0109 - mae: 0.0825 - val_loss: 0.1605 - val_mse: 0.1605 - val_mae: 0.3509 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0106 - mse: 0.0106 - mae: 0.0821 - val_loss: 0.1603 - val_mse: 0.1603 - val_mae: 0.3507 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0100 - mse: 0.0100 - mae: 0.0795 - val_loss: 0.1603 - val_mse: 0.1603 - val_mae: 0.3506 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0106 - mse: 0.0106 - mae: 0.0820 - val_loss: 0.1600 - val_mse: 0.1600 - val_mae: 0.3502 - lr: 1.0000e-05 - 93ms/epoch - 6ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0101 - mse: 0.0101 - mae: 0.0802 - val_loss: 0.1598 - val_mse: 0.1598 - val_mae: 0.3499 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0099 - mse: 0.0099 - mae: 0.0782 - val_loss: 0.1596 - val_mse: 0.1596 - val_mae: 0.3496 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0113 - mse: 0.0113 - mae: 0.0835 - val_loss: 0.1594 - val_mse: 0.1594 - val_mae: 0.3493 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0104 - mse: 0.0104 - mae: 0.0824 - val_loss: 0.1592 - val_mse: 0.1592 - val_mae: 0.3492 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0099 - mse: 0.0099 - mae: 0.0781 - val_loss: 0.1592 - val_mse: 0.1592 - val_mae: 0.3491 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0092 - mse: 0.0092 - mae: 0.0759 - val_loss: 0.1590 - val_mse: 0.1590 - val_mae: 0.3489 - lr: 1.0000e-05 - 78ms/epoch - 5ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0114 - mse: 0.0114 - mae: 0.0852 - val_loss: 0.1589 - val_mse: 0.1589 - val_mae: 0.3487 - lr: 1.0000e-05 - 93ms/epoch - 6ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0108 - mse: 0.0108 - mae: 0.0823 - val_loss: 0.1587 - val_mse: 0.1587 - val_mae: 0.3484 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0100 - mse: 0.0100 - mae: 0.0784 - val_loss: 0.1589 - val_mse: 0.1589 - val_mae: 0.3486 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0111 - mse: 0.0111 - mae: 0.0829 - val_loss: 0.1588 - val_mse: 0.1588 - val_mae: 0.3486 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0103 - mse: 0.0103 - mae: 0.0803 - val_loss: 0.1589 - val_mse: 0.1589 - val_mae: 0.3487 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0106 - mse: 0.0106 - mae: 0.0809 - val_loss: 0.1587 - val_mse: 0.1587 - val_mae: 0.3484 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0107 - mse: 0.0107 - mae: 0.0813 - val_loss: 0.1585 - val_mse: 0.1585 - val_mae: 0.3482 - lr: 1.0000e-05 - 89ms/epoch - 6ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0088 - mse: 0.0088 - mae: 0.0756 - val_loss: 0.1585 - val_mse: 0.1585 - val_mae: 0.3481 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0107 - mse: 0.0107 - mae: 0.0827 - val_loss: 0.1584 - val_mse: 0.1584 - val_mae: 0.3479 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0102 - mse: 0.0102 - mae: 0.0811 - val_loss: 0.1582 - val_mse: 0.1582 - val_mae: 0.3477 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0097 - mse: 0.0097 - mae: 0.0792 - val_loss: 0.1579 - val_mse: 0.1579 - val_mae: 0.3473 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0102 - mse: 0.0102 - mae: 0.0811 - val_loss: 0.1575 - val_mse: 0.1575 - val_mae: 0.3468 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0107 - mse: 0.0107 - mae: 0.0818 - val_loss: 0.1573 - val_mse: 0.1573 - val_mae: 0.3465 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.14368
16/16 - 0s - loss: 0.0102 - mse: 0.0102 - mae: 0.0798 - val_loss: 0.1572 - val_mse: 0.1572 - val_mae: 0.3463 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close:		50.0% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 133.6684179910327 
RMSE:	 11.561505870388714 
MAPE:	 10.289389775089397

EMA
Prediction vs Close:		50.0% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 34.52931938659473 
RMSE:	 5.876165364129462 
MAPE:	 4.852473639818076
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49
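The weighted moving average differs from the EMA in using linearly increasing weights over a fixed window, with the most recent price weighted highest. A minimal numpy sketch of TA-Lib's `WMA` convention (weights 1..timeperiod, NaN before the lookback) looks like this; again a sketch, not the library's implementation.

```python
import numpy as np

def wma(price, timeperiod=30):
    """Weighted moving average, TA-Lib style: linear weights 1..timeperiod,
    newest price weighted highest; pre-lookback positions are NaN."""
    price = np.asarray(price, dtype=float)
    weights = np.arange(1, timeperiod + 1, dtype=float)
    out = np.full_like(price, np.nan)
    for i in range(timeperiod - 1, len(price)):
        window = price[i - timeperiod + 1 : i + 1]
        out[i] = np.dot(window, weights) / weights.sum()
    return out
```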

Working on WMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16999.780, Time=2.89 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14568.589, Time=4.60 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16789.784, Time=12.16 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14566.589, Time=7.24 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16919.987, Time=9.55 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-14616.097, Time=12.57 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-17225.955, Time=18.58 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-14562.589, Time=9.75 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-15582.364, Time=19.25 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-12043.670, Time=36.40 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 133.001 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8639.977
Date:                Sun, 12 Dec 2021   AIC                         -17225.955
Time:                        17:22:29   BIC                         -17099.302
Sample:                             0   HQIC                        -17177.315
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -4.802e-09   4.51e-06     -0.001      0.999   -8.84e-06    8.83e-06
x2         -4.783e-09    4.5e-06     -0.001      0.999   -8.83e-06    8.82e-06
x3         -4.811e-09   4.51e-06     -0.001      0.999   -8.85e-06    8.84e-06
x4             1.0000   4.51e-06   2.22e+05      0.000       1.000       1.000
x5         -4.353e-09    4.3e-06     -0.001      0.999   -8.43e-06    8.42e-06
x6         -1.569e-08   7.54e-06     -0.002      0.998   -1.48e-05    1.48e-05
x7          -4.75e-09   4.49e-06     -0.001      0.999    -8.8e-06    8.79e-06
x8         -4.628e-09   4.43e-06     -0.001      0.999   -8.69e-06    8.69e-06
x9         -4.733e-10   1.16e-06     -0.000      1.000   -2.27e-06    2.27e-06
x10         -7.88e-10    1.8e-06     -0.000      1.000   -3.52e-06    3.52e-06
x11        -4.609e-09   4.42e-06     -0.001      0.999   -8.68e-06    8.67e-06
x12        -4.607e-09   4.42e-06     -0.001      0.999   -8.68e-06    8.67e-06
x13        -4.792e-09   4.51e-06     -0.001      0.999   -8.84e-06    8.83e-06
x14        -3.777e-08   1.24e-05     -0.003      0.998   -2.44e-05    2.44e-05
x15         -3.99e-09   4.12e-06     -0.001      0.999   -8.08e-06    8.07e-06
x16        -1.309e-08   7.41e-06     -0.002      0.999   -1.45e-05    1.45e-05
x17        -4.789e-09   4.51e-06     -0.001      0.999   -8.85e-06    8.84e-06
x18        -2.665e-10   9.77e-07     -0.000      1.000   -1.92e-06    1.92e-06
x19        -4.919e-09   4.56e-06     -0.001      0.999   -8.94e-06    8.93e-06
x20            -4e-10   9.58e-07     -0.000      1.000   -1.88e-06    1.88e-06
x21        -6.782e-08   1.67e-05     -0.004      0.997   -3.27e-05    3.26e-05
x22         -6.03e-08   1.58e-05     -0.004      0.997   -3.09e-05    3.08e-05
x23        -3.157e-08   1.14e-05     -0.003      0.998   -2.23e-05    2.23e-05
x24        -3.671e-08   1.23e-05     -0.003      0.998   -2.41e-05    2.41e-05
ma.L1         -1.3901   5.58e-10  -2.49e+09      0.000      -1.390      -1.390
ma.L2          0.4033   5.75e-10   7.02e+08      0.000       0.403       0.403
sigma2      7.525e-11   6.92e-11      1.088      0.277   -6.03e-11    2.11e-10
===================================================================================
Ljung-Box (L1) (Q):                  69.18   Jarque-Bera (JB):           6366427.21
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            12.29
Prob(H) (two-sided):                  0.00   Kurtosis:                       437.97
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.29e+25. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.07482, saving model to LSTM7.h5
17/17 - 3s - loss: 0.0780 - mse: 0.0780 - mae: 0.2303 - val_loss: 0.0748 - val_mse: 0.0748 - val_mae: 0.2408 - lr: 0.0010 - 3s/epoch - 159ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.07482 to 0.06158, saving model to LSTM7.h5
17/17 - 0s - loss: 0.0356 - mse: 0.0356 - mae: 0.1480 - val_loss: 0.0616 - val_mse: 0.0616 - val_mae: 0.2091 - lr: 0.0010 - 91ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0187 - mse: 0.0187 - mae: 0.1092 - val_loss: 0.0764 - val_mse: 0.0764 - val_mae: 0.2089 - lr: 0.0010 - 87ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0161 - mse: 0.0161 - mae: 0.1015 - val_loss: 0.0864 - val_mse: 0.0864 - val_mae: 0.2191 - lr: 0.0010 - 99ms/epoch - 6ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0148 - mse: 0.0148 - mae: 0.0954 - val_loss: 0.0902 - val_mse: 0.0902 - val_mae: 0.2231 - lr: 0.0010 - 86ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0115 - mse: 0.0115 - mae: 0.0835 - val_loss: 0.1079 - val_mse: 0.1079 - val_mae: 0.2441 - lr: 0.0010 - 87ms/epoch - 5ms/step
Epoch 7/500

Epoch 00007: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00007: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0102 - mse: 0.0102 - mae: 0.0798 - val_loss: 0.1203 - val_mse: 0.1203 - val_mae: 0.2608 - lr: 0.0010 - 102ms/epoch - 6ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0098 - mse: 0.0098 - mae: 0.0778 - val_loss: 0.1177 - val_mse: 0.1177 - val_mae: 0.2571 - lr: 1.0000e-04 - 90ms/epoch - 5ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0104 - mse: 0.0104 - mae: 0.0799 - val_loss: 0.1162 - val_mse: 0.1162 - val_mae: 0.2550 - lr: 1.0000e-04 - 84ms/epoch - 5ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0093 - mse: 0.0093 - mae: 0.0763 - val_loss: 0.1157 - val_mse: 0.1157 - val_mae: 0.2544 - lr: 1.0000e-04 - 83ms/epoch - 5ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0094 - mse: 0.0094 - mae: 0.0765 - val_loss: 0.1159 - val_mse: 0.1159 - val_mae: 0.2546 - lr: 1.0000e-04 - 95ms/epoch - 6ms/step
Epoch 12/500

Epoch 00012: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00012: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0091 - mse: 0.0091 - mae: 0.0749 - val_loss: 0.1178 - val_mse: 0.1178 - val_mae: 0.2573 - lr: 1.0000e-04 - 91ms/epoch - 5ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0095 - mse: 0.0095 - mae: 0.0771 - val_loss: 0.1178 - val_mse: 0.1178 - val_mae: 0.2573 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0090 - mse: 0.0090 - mae: 0.0754 - val_loss: 0.1180 - val_mse: 0.1180 - val_mae: 0.2575 - lr: 1.0000e-05 - 89ms/epoch - 5ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0100 - mse: 0.0100 - mae: 0.0782 - val_loss: 0.1178 - val_mse: 0.1178 - val_mae: 0.2573 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0092 - mse: 0.0092 - mae: 0.0743 - val_loss: 0.1177 - val_mse: 0.1177 - val_mae: 0.2571 - lr: 1.0000e-05 - 90ms/epoch - 5ms/step
Epoch 17/500

Epoch 00017: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00017: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0089 - mse: 0.0089 - mae: 0.0744 - val_loss: 0.1176 - val_mse: 0.1176 - val_mae: 0.2570 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0083 - mse: 0.0083 - mae: 0.0713 - val_loss: 0.1176 - val_mse: 0.1176 - val_mae: 0.2570 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0084 - mse: 0.0084 - mae: 0.0713 - val_loss: 0.1175 - val_mse: 0.1175 - val_mae: 0.2569 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0090 - mse: 0.0090 - mae: 0.0743 - val_loss: 0.1175 - val_mse: 0.1175 - val_mae: 0.2569 - lr: 1.0000e-05 - 106ms/epoch - 6ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0091 - mse: 0.0091 - mae: 0.0741 - val_loss: 0.1176 - val_mse: 0.1176 - val_mae: 0.2570 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0092 - mse: 0.0092 - mae: 0.0756 - val_loss: 0.1177 - val_mse: 0.1177 - val_mae: 0.2571 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0088 - mse: 0.0088 - mae: 0.0749 - val_loss: 0.1179 - val_mse: 0.1179 - val_mae: 0.2575 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0097 - mse: 0.0097 - mae: 0.0780 - val_loss: 0.1182 - val_mse: 0.1182 - val_mae: 0.2578 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0085 - mse: 0.0085 - mae: 0.0726 - val_loss: 0.1183 - val_mse: 0.1183 - val_mae: 0.2580 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0088 - mse: 0.0088 - mae: 0.0724 - val_loss: 0.1183 - val_mse: 0.1183 - val_mae: 0.2580 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0090 - mse: 0.0090 - mae: 0.0747 - val_loss: 0.1184 - val_mse: 0.1184 - val_mae: 0.2581 - lr: 1.0000e-05 - 92ms/epoch - 5ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0089 - mse: 0.0089 - mae: 0.0738 - val_loss: 0.1183 - val_mse: 0.1183 - val_mae: 0.2580 - lr: 1.0000e-05 - 92ms/epoch - 5ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0737 - val_loss: 0.1185 - val_mse: 0.1185 - val_mae: 0.2583 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0094 - mse: 0.0094 - mae: 0.0751 - val_loss: 0.1183 - val_mse: 0.1183 - val_mae: 0.2580 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0091 - mse: 0.0091 - mae: 0.0750 - val_loss: 0.1183 - val_mse: 0.1183 - val_mae: 0.2581 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0093 - mse: 0.0093 - mae: 0.0762 - val_loss: 0.1184 - val_mse: 0.1184 - val_mae: 0.2581 - lr: 1.0000e-05 - 89ms/epoch - 5ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0751 - val_loss: 0.1184 - val_mse: 0.1184 - val_mae: 0.2582 - lr: 1.0000e-05 - 80ms/epoch - 5ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0083 - mse: 0.0083 - mae: 0.0715 - val_loss: 0.1186 - val_mse: 0.1186 - val_mae: 0.2585 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0082 - mse: 0.0082 - mae: 0.0715 - val_loss: 0.1187 - val_mse: 0.1187 - val_mae: 0.2586 - lr: 1.0000e-05 - 90ms/epoch - 5ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0091 - mse: 0.0091 - mae: 0.0748 - val_loss: 0.1187 - val_mse: 0.1187 - val_mae: 0.2587 - lr: 1.0000e-05 - 107ms/epoch - 6ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0084 - mse: 0.0084 - mae: 0.0719 - val_loss: 0.1188 - val_mse: 0.1188 - val_mae: 0.2588 - lr: 1.0000e-05 - 92ms/epoch - 5ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0092 - mse: 0.0092 - mae: 0.0743 - val_loss: 0.1188 - val_mse: 0.1188 - val_mae: 0.2589 - lr: 1.0000e-05 - 80ms/epoch - 5ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0080 - mse: 0.0080 - mae: 0.0710 - val_loss: 0.1189 - val_mse: 0.1189 - val_mae: 0.2589 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0089 - mse: 0.0089 - mae: 0.0744 - val_loss: 0.1188 - val_mse: 0.1188 - val_mae: 0.2588 - lr: 1.0000e-05 - 93ms/epoch - 5ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0088 - mse: 0.0088 - mae: 0.0730 - val_loss: 0.1186 - val_mse: 0.1186 - val_mae: 0.2585 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0733 - val_loss: 0.1186 - val_mse: 0.1186 - val_mae: 0.2585 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0738 - val_loss: 0.1186 - val_mse: 0.1186 - val_mae: 0.2586 - lr: 1.0000e-05 - 106ms/epoch - 6ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0083 - mse: 0.0083 - mae: 0.0711 - val_loss: 0.1190 - val_mse: 0.1190 - val_mae: 0.2591 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0088 - mse: 0.0088 - mae: 0.0742 - val_loss: 0.1188 - val_mse: 0.1188 - val_mae: 0.2589 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0093 - mse: 0.0093 - mae: 0.0755 - val_loss: 0.1187 - val_mse: 0.1187 - val_mae: 0.2587 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0081 - mse: 0.0081 - mae: 0.0695 - val_loss: 0.1188 - val_mse: 0.1188 - val_mae: 0.2588 - lr: 1.0000e-05 - 111ms/epoch - 7ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0088 - mse: 0.0088 - mae: 0.0744 - val_loss: 0.1189 - val_mse: 0.1189 - val_mae: 0.2590 - lr: 1.0000e-05 - 89ms/epoch - 5ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0078 - mse: 0.0078 - mae: 0.0708 - val_loss: 0.1194 - val_mse: 0.1194 - val_mae: 0.2597 - lr: 1.0000e-05 - 79ms/epoch - 5ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0083 - mse: 0.0083 - mae: 0.0723 - val_loss: 0.1195 - val_mse: 0.1195 - val_mae: 0.2598 - lr: 1.0000e-05 - 91ms/epoch - 5ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0085 - mse: 0.0085 - mae: 0.0725 - val_loss: 0.1197 - val_mse: 0.1197 - val_mae: 0.2601 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.06158
17/17 - 0s - loss: 0.0085 - mse: 0.0085 - mae: 0.0720 - val_loss: 0.1199 - val_mse: 0.1199 - val_mae: 0.2605 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 00052: early stopping
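The checkpoint, plateau, and early-stopping messages in the log above are consistent with a Keras callback stack along these lines. The patience values are inferred from the log (learning rate drops after 5 stalled epochs, training stops after roughly 50), and the commented `model.fit` call is illustrative, not the notebook's exact code.

```python
# Sketch of the callback setup implied by the training log: checkpointing the
# best model to LSTM7.h5, ReduceLROnPlateau steps of 10x down to a 1e-5 floor,
# and early stopping. Patience values are inferred from the log above.
from tensorflow.keras.callbacks import (
    ModelCheckpoint, ReduceLROnPlateau, EarlyStopping)

callbacks = [
    ModelCheckpoint("LSTM7.h5", monitor="val_loss",
                    save_best_only=True, verbose=1),
    ReduceLROnPlateau(monitor="val_loss", factor=0.1, patience=5,
                      min_lr=1e-5, verbose=1),
    EarlyStopping(monitor="val_loss", patience=50, verbose=1),
]
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=500, verbose=2, callbacks=callbacks)
```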
SMA
Prediction vs Close:		50.0% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 133.6684179910327 
RMSE:	 11.561505870388714 
MAPE:	 10.289389775089397

EMA
Prediction vs Close:		50.0% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 34.52931938659473 
RMSE:	 5.876165364129462 
MAPE:	 4.852473639818076

WMA
Prediction vs Close:		51.49% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 36.74468487644727 
RMSE:	 6.061739426637149 
MAPE:	 4.85767480758183
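The MA-variant metrics above (MSE, RMSE, MAPE, directional accuracy) can be computed with a small helper like the one below. The directional-accuracy definition here is one plausible reading of "Prediction vs Close" (fraction of periods where the prediction moved in the same direction as the actual close), not necessarily the notebook's exact formula.

```python
import numpy as np

def report(actual, pred):
    actual = np.asarray(actual, dtype=float)
    pred = np.asarray(pred, dtype=float)
    mse = np.mean((actual - pred) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((actual - pred) / actual)) * 100
    # One reading of "Prediction vs Close": fraction of periods where the
    # prediction moved in the same direction as the actual close.
    same_dir = np.sign(np.diff(pred)) == np.sign(np.diff(actual))
    accuracy = 100 * same_dir.mean()
    return mse, rmse, mape, accuracy

mse, rmse, mape, accuracy = report([10, 11, 12, 11], [10.5, 10.8, 12.3, 11.4])
```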

DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
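The help text above is TA-Lib's DEMA docstring. As a self-contained illustration of what the indicator computes, DEMA is twice an EMA minus the EMA of that EMA; the pandas sketch below uses TA-Lib's default 30-period window (unlike TA-Lib, this version does not emit NaNs over the initial lookback window).

```python
import numpy as np
import pandas as pd

# Toy price series; the notebook would use its actual close prices here.
close = pd.Series(np.random.default_rng(2).normal(size=120).cumsum() + 100.0)

# DEMA = 2*EMA(price) - EMA(EMA(price)), with TA-Lib's default 30 period.
ema = close.ewm(span=30, adjust=False).mean()
dema = 2 * ema - ema.ewm(span=30, adjust=False).mean()
```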

Working on DEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16999.785, Time=3.06 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14568.588, Time=4.53 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15575.689, Time=8.72 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14566.588, Time=6.95 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16714.796, Time=8.85 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15610.140, Time=10.53 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-17225.835, Time=22.19 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-14562.588, Time=9.30 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-16751.951, Time=20.80 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-11788.089, Time=30.48 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 125.433 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8639.917
Date:                Sun, 12 Dec 2021   AIC                         -17225.835
Time:                        17:29:03   BIC                         -17099.182
Sample:                             0   HQIC                        -17177.195
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -5.894e-09   3.61e-06     -0.002      0.999   -7.09e-06    7.08e-06
x2          -5.93e-09   3.63e-06     -0.002      0.999   -7.11e-06     7.1e-06
x3         -5.905e-09   3.62e-06     -0.002      0.999    -7.1e-06    7.09e-06
x4             1.0000   3.62e-06   2.76e+05      0.000       1.000       1.000
x5         -5.457e-09   3.48e-06     -0.002      0.999   -6.83e-06    6.82e-06
x6         -3.019e-08   7.72e-06     -0.004      0.997   -1.52e-05    1.51e-05
x7          -5.87e-09   3.61e-06     -0.002      0.999   -7.08e-06    7.07e-06
x8         -5.809e-09   3.59e-06     -0.002      0.999   -7.05e-06    7.04e-06
x9         -9.293e-11   9.83e-08     -0.001      0.999   -1.93e-07    1.93e-07
x10        -2.793e-09   2.47e-06     -0.001      0.999   -4.84e-06    4.84e-06
x11        -6.095e-09   3.68e-06     -0.002      0.999   -7.21e-06     7.2e-06
x12        -5.478e-09   3.49e-06     -0.002      0.999   -6.85e-06    6.84e-06
x13         -5.91e-09   3.62e-06     -0.002      0.999    -7.1e-06    7.09e-06
x14        -4.085e-08   9.35e-06     -0.004      0.997   -1.84e-05    1.83e-05
x15         -5.93e-09   3.63e-06     -0.002      0.999   -7.12e-06    7.11e-06
x16        -1.618e-09   1.92e-06     -0.001      0.999   -3.76e-06    3.75e-06
x17        -5.076e-09   3.37e-06     -0.002      0.999    -6.6e-06    6.59e-06
x18        -1.377e-08    5.5e-06     -0.003      0.998   -1.08e-05    1.08e-05
x19        -6.135e-09   3.69e-06     -0.002      0.999   -7.23e-06    7.22e-06
x20        -1.018e-08   4.43e-06     -0.002      0.998   -8.68e-06    8.66e-06
x21        -6.911e-08   1.21e-05     -0.006      0.995   -2.39e-05    2.37e-05
x22        -5.656e-08    1.1e-05     -0.005      0.996   -2.16e-05    2.15e-05
x23        -5.355e-08   1.07e-05     -0.005      0.996    -2.1e-05    2.09e-05
x24        -3.636e-08   8.85e-06     -0.004      0.997   -1.74e-05    1.73e-05
ma.L1         -1.3899   4.86e-11  -2.86e+10      0.000      -1.390      -1.390
ma.L2          0.4032    4.6e-11   8.76e+09      0.000       0.403       0.403
sigma2      7.526e-11   6.92e-11      1.088      0.277   -6.03e-11    2.11e-10
===================================================================================
Ljung-Box (L1) (Q):                  69.65   Jarque-Bera (JB):           6422892.15
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            12.42
Prob(H) (two-sided):                  0.00   Kurtosis:                       439.89
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.74e+29. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.70173, saving model to LSTM7.h5
10/10 - 2s - loss: 0.8879 - mse: 0.8879 - mae: 0.7859 - val_loss: 0.7017 - val_mse: 0.7017 - val_mae: 0.8156 - lr: 0.0010 - 2s/epoch - 230ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.70173 to 0.38309, saving model to LSTM7.h5
10/10 - 0s - loss: 0.1203 - mse: 0.1203 - mae: 0.2714 - val_loss: 0.3831 - val_mse: 0.3831 - val_mae: 0.5890 - lr: 0.0010 - 67ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.38309 to 0.27460, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0877 - mse: 0.0877 - mae: 0.2461 - val_loss: 0.2746 - val_mse: 0.2746 - val_mae: 0.4891 - lr: 0.0010 - 84ms/epoch - 8ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.27460 to 0.24714, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0776 - mse: 0.0776 - mae: 0.2356 - val_loss: 0.2471 - val_mse: 0.2471 - val_mae: 0.4620 - lr: 0.0010 - 70ms/epoch - 7ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.24714 to 0.23605, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0446 - mse: 0.0446 - mae: 0.1703 - val_loss: 0.2360 - val_mse: 0.2360 - val_mae: 0.4519 - lr: 0.0010 - 82ms/epoch - 8ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.23605 to 0.20789, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0371 - mse: 0.0371 - mae: 0.1529 - val_loss: 0.2079 - val_mse: 0.2079 - val_mae: 0.4212 - lr: 0.0010 - 72ms/epoch - 7ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.20789 to 0.17846, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0307 - mse: 0.0307 - mae: 0.1320 - val_loss: 0.1785 - val_mse: 0.1785 - val_mae: 0.3859 - lr: 0.0010 - 68ms/epoch - 7ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.17846 to 0.15659, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0258 - mse: 0.0258 - mae: 0.1266 - val_loss: 0.1566 - val_mse: 0.1566 - val_mae: 0.3577 - lr: 0.0010 - 80ms/epoch - 8ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.15659 to 0.14034, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0228 - mse: 0.0228 - mae: 0.1191 - val_loss: 0.1403 - val_mse: 0.1403 - val_mae: 0.3356 - lr: 0.0010 - 102ms/epoch - 10ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.14034 to 0.13114, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0209 - mse: 0.0209 - mae: 0.1142 - val_loss: 0.1311 - val_mse: 0.1311 - val_mae: 0.3230 - lr: 0.0010 - 73ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: val_loss improved from 0.13114 to 0.11841, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0208 - mse: 0.0208 - mae: 0.1138 - val_loss: 0.1184 - val_mse: 0.1184 - val_mae: 0.3043 - lr: 0.0010 - 73ms/epoch - 7ms/step
Epoch 12/500

Epoch 00012: val_loss improved from 0.11841 to 0.11155, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0183 - mse: 0.0183 - mae: 0.1071 - val_loss: 0.1116 - val_mse: 0.1116 - val_mae: 0.2942 - lr: 0.0010 - 69ms/epoch - 7ms/step
Epoch 13/500

Epoch 00013: val_loss improved from 0.11155 to 0.10249, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0157 - mse: 0.0157 - mae: 0.0967 - val_loss: 0.1025 - val_mse: 0.1025 - val_mae: 0.2799 - lr: 0.0010 - 80ms/epoch - 8ms/step
Epoch 14/500

Epoch 00014: val_loss improved from 0.10249 to 0.09636, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0165 - mse: 0.0165 - mae: 0.1001 - val_loss: 0.0964 - val_mse: 0.0964 - val_mae: 0.2701 - lr: 0.0010 - 88ms/epoch - 9ms/step
Epoch 15/500

Epoch 00015: val_loss improved from 0.09636 to 0.09416, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0141 - mse: 0.0141 - mae: 0.0953 - val_loss: 0.0942 - val_mse: 0.0942 - val_mae: 0.2671 - lr: 0.0010 - 71ms/epoch - 7ms/step
Epoch 16/500

Epoch 00016: val_loss improved from 0.09416 to 0.08719, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0129 - mse: 0.0129 - mae: 0.0894 - val_loss: 0.0872 - val_mse: 0.0872 - val_mae: 0.2548 - lr: 0.0010 - 77ms/epoch - 8ms/step
Epoch 17/500

Epoch 00017: val_loss improved from 0.08719 to 0.08293, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0148 - mse: 0.0148 - mae: 0.0946 - val_loss: 0.0829 - val_mse: 0.0829 - val_mae: 0.2473 - lr: 0.0010 - 78ms/epoch - 8ms/step
Epoch 18/500

Epoch 00018: val_loss improved from 0.08293 to 0.07710, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0130 - mse: 0.0130 - mae: 0.0895 - val_loss: 0.0771 - val_mse: 0.0771 - val_mae: 0.2366 - lr: 0.0010 - 76ms/epoch - 8ms/step
Epoch 19/500

Epoch 00019: val_loss improved from 0.07710 to 0.07451, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0124 - mse: 0.0124 - mae: 0.0873 - val_loss: 0.0745 - val_mse: 0.0745 - val_mae: 0.2321 - lr: 0.0010 - 106ms/epoch - 11ms/step
Epoch 20/500

Epoch 00020: val_loss improved from 0.07451 to 0.07246, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0121 - mse: 0.0121 - mae: 0.0832 - val_loss: 0.0725 - val_mse: 0.0725 - val_mae: 0.2289 - lr: 0.0010 - 91ms/epoch - 9ms/step
Epoch 21/500

Epoch 00021: val_loss improved from 0.07246 to 0.07118, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0115 - mse: 0.0115 - mae: 0.0837 - val_loss: 0.0712 - val_mse: 0.0712 - val_mae: 0.2270 - lr: 0.0010 - 94ms/epoch - 9ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.07118
10/10 - 0s - loss: 0.0112 - mse: 0.0112 - mae: 0.0839 - val_loss: 0.0744 - val_mse: 0.0744 - val_mae: 0.2343 - lr: 0.0010 - 80ms/epoch - 8ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.07118
10/10 - 0s - loss: 0.0113 - mse: 0.0113 - mae: 0.0829 - val_loss: 0.0728 - val_mse: 0.0728 - val_mae: 0.2317 - lr: 0.0010 - 62ms/epoch - 6ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.07118
10/10 - 0s - loss: 0.0105 - mse: 0.0105 - mae: 0.0803 - val_loss: 0.0718 - val_mse: 0.0718 - val_mae: 0.2301 - lr: 0.0010 - 79ms/epoch - 8ms/step
Epoch 25/500

Epoch 00025: val_loss improved from 0.07118 to 0.06944, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0104 - mse: 0.0104 - mae: 0.0796 - val_loss: 0.0694 - val_mse: 0.0694 - val_mae: 0.2258 - lr: 0.0010 - 80ms/epoch - 8ms/step
Epoch 26/500

Epoch 00026: val_loss improved from 0.06944 to 0.06827, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0089 - mse: 0.0089 - mae: 0.0749 - val_loss: 0.0683 - val_mse: 0.0683 - val_mae: 0.2238 - lr: 0.0010 - 72ms/epoch - 7ms/step
Epoch 27/500

Epoch 00027: val_loss improved from 0.06827 to 0.06524, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0086 - mse: 0.0086 - mae: 0.0726 - val_loss: 0.0652 - val_mse: 0.0652 - val_mae: 0.2178 - lr: 0.0010 - 85ms/epoch - 8ms/step
Epoch 28/500

Epoch 00028: val_loss improved from 0.06524 to 0.06351, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0091 - mse: 0.0091 - mae: 0.0749 - val_loss: 0.0635 - val_mse: 0.0635 - val_mae: 0.2144 - lr: 0.0010 - 75ms/epoch - 8ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.06351
10/10 - 0s - loss: 0.0092 - mse: 0.0092 - mae: 0.0758 - val_loss: 0.0643 - val_mse: 0.0643 - val_mae: 0.2168 - lr: 0.0010 - 59ms/epoch - 6ms/step
Epoch 30/500

Epoch 00030: val_loss improved from 0.06351 to 0.06295, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0089 - mse: 0.0089 - mae: 0.0732 - val_loss: 0.0630 - val_mse: 0.0630 - val_mae: 0.2142 - lr: 0.0010 - 87ms/epoch - 9ms/step
Epoch 31/500

Epoch 00031: val_loss improved from 0.06295 to 0.05798, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0081 - mse: 0.0081 - mae: 0.0703 - val_loss: 0.0580 - val_mse: 0.0580 - val_mae: 0.2031 - lr: 0.0010 - 72ms/epoch - 7ms/step
Epoch 32/500

Epoch 00032: val_loss improved from 0.05798 to 0.05263, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0692 - val_loss: 0.0526 - val_mse: 0.0526 - val_mae: 0.1907 - lr: 0.0010 - 81ms/epoch - 8ms/step
Epoch 33/500

Epoch 00033: val_loss improved from 0.05263 to 0.05221, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0077 - mse: 0.0077 - mae: 0.0684 - val_loss: 0.0522 - val_mse: 0.0522 - val_mae: 0.1899 - lr: 0.0010 - 79ms/epoch - 8ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0078 - mse: 0.0078 - mae: 0.0678 - val_loss: 0.0534 - val_mse: 0.0534 - val_mae: 0.1927 - lr: 0.0010 - 63ms/epoch - 6ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0072 - mse: 0.0072 - mae: 0.0672 - val_loss: 0.0552 - val_mse: 0.0552 - val_mae: 0.1972 - lr: 0.0010 - 60ms/epoch - 6ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0650 - val_loss: 0.0598 - val_mse: 0.0598 - val_mae: 0.2083 - lr: 0.0010 - 55ms/epoch - 6ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0629 - val_loss: 0.0640 - val_mse: 0.0640 - val_mae: 0.2176 - lr: 0.0010 - 75ms/epoch - 7ms/step
Epoch 38/500

Epoch 00038: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00038: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0655 - val_loss: 0.0587 - val_mse: 0.0587 - val_mae: 0.2060 - lr: 0.0010 - 69ms/epoch - 7ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0646 - val_loss: 0.0583 - val_mse: 0.0583 - val_mae: 0.2049 - lr: 1.0000e-04 - 72ms/epoch - 7ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0627 - val_loss: 0.0581 - val_mse: 0.0581 - val_mae: 0.2044 - lr: 1.0000e-04 - 71ms/epoch - 7ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0602 - val_loss: 0.0578 - val_mse: 0.0578 - val_mae: 0.2037 - lr: 1.0000e-04 - 71ms/epoch - 7ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0651 - val_loss: 0.0577 - val_mse: 0.0577 - val_mae: 0.2037 - lr: 1.0000e-04 - 60ms/epoch - 6ms/step
Epoch 43/500

Epoch 00043: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00043: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0607 - val_loss: 0.0576 - val_mse: 0.0576 - val_mae: 0.2034 - lr: 1.0000e-04 - 59ms/epoch - 6ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0610 - val_loss: 0.0576 - val_mse: 0.0576 - val_mae: 0.2033 - lr: 1.0000e-05 - 71ms/epoch - 7ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0612 - val_loss: 0.0575 - val_mse: 0.0575 - val_mae: 0.2032 - lr: 1.0000e-05 - 75ms/epoch - 7ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0629 - val_loss: 0.0575 - val_mse: 0.0575 - val_mae: 0.2031 - lr: 1.0000e-05 - 88ms/epoch - 9ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0627 - val_loss: 0.0574 - val_mse: 0.0574 - val_mae: 0.2029 - lr: 1.0000e-05 - 75ms/epoch - 7ms/step
Epoch 48/500

Epoch 00048: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00048: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0622 - val_loss: 0.0574 - val_mse: 0.0574 - val_mae: 0.2029 - lr: 1.0000e-05 - 66ms/epoch - 7ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0601 - val_loss: 0.0574 - val_mse: 0.0574 - val_mae: 0.2029 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0606 - val_loss: 0.0574 - val_mse: 0.0574 - val_mae: 0.2029 - lr: 1.0000e-05 - 66ms/epoch - 7ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0616 - val_loss: 0.0575 - val_mse: 0.0575 - val_mae: 0.2031 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0611 - val_loss: 0.0575 - val_mse: 0.0575 - val_mae: 0.2031 - lr: 1.0000e-05 - 71ms/epoch - 7ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0645 - val_loss: 0.0574 - val_mse: 0.0574 - val_mae: 0.2030 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0612 - val_loss: 0.0575 - val_mse: 0.0575 - val_mae: 0.2030 - lr: 1.0000e-05 - 57ms/epoch - 6ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0627 - val_loss: 0.0575 - val_mse: 0.0575 - val_mae: 0.2031 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0642 - val_loss: 0.0575 - val_mse: 0.0575 - val_mae: 0.2031 - lr: 1.0000e-05 - 60ms/epoch - 6ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0654 - val_loss: 0.0575 - val_mse: 0.0575 - val_mae: 0.2031 - lr: 1.0000e-05 - 56ms/epoch - 6ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0643 - val_loss: 0.0574 - val_mse: 0.0574 - val_mae: 0.2030 - lr: 1.0000e-05 - 74ms/epoch - 7ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0596 - val_loss: 0.0574 - val_mse: 0.0574 - val_mae: 0.2029 - lr: 1.0000e-05 - 64ms/epoch - 6ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0645 - val_loss: 0.0573 - val_mse: 0.0573 - val_mae: 0.2027 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0627 - val_loss: 0.0573 - val_mse: 0.0573 - val_mae: 0.2027 - lr: 1.0000e-05 - 110ms/epoch - 11ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0625 - val_loss: 0.0573 - val_mse: 0.0573 - val_mae: 0.2027 - lr: 1.0000e-05 - 62ms/epoch - 6ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0618 - val_loss: 0.0573 - val_mse: 0.0573 - val_mae: 0.2026 - lr: 1.0000e-05 - 76ms/epoch - 8ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0616 - val_loss: 0.0572 - val_mse: 0.0572 - val_mae: 0.2026 - lr: 1.0000e-05 - 82ms/epoch - 8ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0632 - val_loss: 0.0572 - val_mse: 0.0572 - val_mae: 0.2025 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0606 - val_loss: 0.0572 - val_mse: 0.0572 - val_mae: 0.2024 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step
Epoch 67/500

Epoch 00067: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0628 - val_loss: 0.0571 - val_mse: 0.0571 - val_mae: 0.2023 - lr: 1.0000e-05 - 66ms/epoch - 7ms/step
Epoch 68/500

Epoch 00068: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0620 - val_loss: 0.0571 - val_mse: 0.0571 - val_mae: 0.2023 - lr: 1.0000e-05 - 54ms/epoch - 5ms/step
Epoch 69/500

Epoch 00069: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0634 - val_loss: 0.0571 - val_mse: 0.0571 - val_mae: 0.2023 - lr: 1.0000e-05 - 59ms/epoch - 6ms/step
Epoch 70/500

Epoch 00070: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0654 - val_loss: 0.0571 - val_mse: 0.0571 - val_mae: 0.2022 - lr: 1.0000e-05 - 70ms/epoch - 7ms/step
Epoch 71/500

Epoch 00071: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0641 - val_loss: 0.0571 - val_mse: 0.0571 - val_mae: 0.2022 - lr: 1.0000e-05 - 79ms/epoch - 8ms/step
Epoch 72/500

Epoch 00072: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0610 - val_loss: 0.0571 - val_mse: 0.0571 - val_mae: 0.2022 - lr: 1.0000e-05 - 62ms/epoch - 6ms/step
Epoch 73/500

Epoch 00073: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0632 - val_loss: 0.0571 - val_mse: 0.0571 - val_mae: 0.2023 - lr: 1.0000e-05 - 61ms/epoch - 6ms/step
Epoch 74/500

Epoch 00074: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0620 - val_loss: 0.0571 - val_mse: 0.0571 - val_mae: 0.2023 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 75/500

Epoch 00075: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0613 - val_loss: 0.0570 - val_mse: 0.0570 - val_mae: 0.2020 - lr: 1.0000e-05 - 67ms/epoch - 7ms/step
Epoch 76/500

Epoch 00076: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0625 - val_loss: 0.0570 - val_mse: 0.0570 - val_mae: 0.2020 - lr: 1.0000e-05 - 61ms/epoch - 6ms/step
Epoch 77/500

Epoch 00077: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0630 - val_loss: 0.0569 - val_mse: 0.0569 - val_mae: 0.2018 - lr: 1.0000e-05 - 75ms/epoch - 8ms/step
Epoch 78/500

Epoch 00078: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0609 - val_loss: 0.0569 - val_mse: 0.0569 - val_mae: 0.2017 - lr: 1.0000e-05 - 59ms/epoch - 6ms/step
Epoch 79/500

Epoch 00079: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0631 - val_loss: 0.0569 - val_mse: 0.0569 - val_mae: 0.2017 - lr: 1.0000e-05 - 61ms/epoch - 6ms/step
Epoch 80/500

Epoch 00080: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0630 - val_loss: 0.0568 - val_mse: 0.0568 - val_mae: 0.2017 - lr: 1.0000e-05 - 59ms/epoch - 6ms/step
Epoch 81/500

Epoch 00081: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0604 - val_loss: 0.0569 - val_mse: 0.0569 - val_mae: 0.2017 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 82/500

Epoch 00082: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0610 - val_loss: 0.0570 - val_mse: 0.0570 - val_mae: 0.2019 - lr: 1.0000e-05 - 65ms/epoch - 7ms/step
Epoch 83/500

Epoch 00083: val_loss did not improve from 0.05221
10/10 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0623 - val_loss: 0.0570 - val_mse: 0.0570 - val_mae: 0.2020 - lr: 1.0000e-05 - 73ms/epoch - 7ms/step
Epoch 00083: early stopping
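The training trace above is produced by three Keras callbacks working together: ModelCheckpoint (the "saving model to LSTM7.h5" lines), ReduceLROnPlateau (the learning-rate drops from 1e-3 to 1e-4 to 1e-5), and EarlyStopping (the final "early stopping" line). As a minimal, framework-free sketch of the patience logic those callbacks share — the actual `patience` and `min_delta` values used by the notebook are not visible in the log, so the numbers below are illustrative:

```python
class PlateauMonitor:
    """Sketch of the patience logic behind ReduceLROnPlateau / EarlyStopping."""

    def __init__(self, patience: int, min_delta: float = 0.0):
        self.patience = patience    # epochs without improvement before triggering
        self.min_delta = min_delta  # minimum change that counts as an improvement
        self.best = float("inf")
        self.wait = 0

    def update(self, val_loss: float) -> bool:
        """Feed one epoch's val_loss; return True once a plateau is detected."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.wait = 0
            return False
        self.wait += 1
        return self.wait >= self.patience


# Illustrative usage: a reducer with a short fuse, a stopper with a long one,
# mirroring how the LR drops before training finally halts.
lr_reducer = PlateauMonitor(patience=5)
stopper = PlateauMonitor(patience=40)
```

In Keras the same roles would be filled by `ReduceLROnPlateau(monitor='val_loss', ...)` and `EarlyStopping(monitor='val_loss', ...)` passed to `model.fit(callbacks=[...])`.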
SMA
Prediction vs Close:		50.0% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 133.6684179910327 
RMSE:	 11.561505870388714 
MAPE:	 10.289389775089397

EMA
Prediction vs Close:		50.0% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 34.52931938659473 
RMSE:	 5.876165364129462 
MAPE:	 4.852473639818076

WMA
Prediction vs Close:		51.49% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 36.74468487644727 
RMSE:	 6.061739426637149 
MAPE:	 4.85767480758183

DEMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 87.45802496937279 
RMSE:	 9.351899538028238 
MAPE:	 8.239361009856534
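Each indicator's report above pairs a directional-accuracy score with MSE, RMSE, and MAPE. A minimal sketch of how the "Prediction vs Close" variant of those metrics could be computed — the array names `preds` and `close` are assumptions for illustration, not the notebook's own variables, and the "Prediction vs Prediction" score is not reproduced here:

```python
import numpy as np

def evaluate(preds: np.ndarray, close: np.ndarray) -> dict:
    """Directional accuracy plus MSE/RMSE/MAPE for one indicator's forecasts."""
    # Prediction vs Close: did the forecast move in the same direction as the
    # actual close, day over day?
    pred_dir = np.sign(np.diff(preds))
    close_dir = np.sign(np.diff(close))
    acc = np.mean(pred_dir == close_dir) * 100

    mse = np.mean((preds - close) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((close - preds) / close)) * 100
    return {"acc": acc, "mse": mse, "rmse": rmse, "mape": mape}
```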

KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18
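The block above is TA-Lib's help text for `KAMA`. As a rough pure-NumPy sketch of the same indicator — Kaufman's efficiency ratio scaling a smoothing constant between fast and slow EMA speeds — with the caveat that TA-Lib's exact seeding of the first value may differ:

```python
import numpy as np

def kama(price: np.ndarray, period: int = 30,
         fast: int = 2, slow: int = 30) -> np.ndarray:
    """Kaufman Adaptive Moving Average (pure-NumPy sketch of TA-Lib's KAMA)."""
    fast_sc = 2.0 / (fast + 1)
    slow_sc = 2.0 / (slow + 1)
    out = np.full(len(price), np.nan)
    out[period - 1] = price[period - 1]  # seed at the first full window
    for t in range(period, len(price)):
        change = abs(price[t] - price[t - period])
        volatility = np.sum(np.abs(np.diff(price[t - period:t + 1])))
        er = change / volatility if volatility != 0 else 0.0  # efficiency ratio
        sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2        # smoothing constant
        out[t] = out[t - 1] + sc * (price[t] - out[t - 1])
    return out
```

In trending markets the efficiency ratio approaches 1 and KAMA tracks price quickly; in choppy markets it approaches 0 and the average flattens out.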

Working on KAMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16921.943, Time=10.66 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14568.592, Time=4.78 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16797.275, Time=9.36 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14566.592, Time=6.82 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16996.465, Time=3.24 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-16999.509, Time=3.22 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17171.315, Time=6.35 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-16994.523, Time=4.16 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=-15518.026, Time=30.90 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 79.512 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood                8613.658
Date:                Sun, 12 Dec 2021   AIC                         -17171.315
Time:                        17:34:56   BIC                         -17039.972
Sample:                             0   HQIC                        -17120.874
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          -5.14e-10    7.6e-05  -6.76e-06      1.000      -0.000       0.000
x2         -5.041e-10   7.52e-05   -6.7e-06      1.000      -0.000       0.000
x3         -4.834e-10   7.38e-05  -6.55e-06      1.000      -0.000       0.000
x4             1.0000   7.46e-05   1.34e+04      0.000       1.000       1.000
x5         -4.462e-10   7.09e-05  -6.29e-06      1.000      -0.000       0.000
x6         -3.064e-09      0.000  -1.84e-05      1.000      -0.000       0.000
x7         -4.751e-10   7.35e-05  -6.46e-06      1.000      -0.000       0.000
x8         -4.628e-10   7.28e-05  -6.36e-06      1.000      -0.000       0.000
x9          -9.21e-11   9.37e-06  -9.83e-06      1.000   -1.84e-05    1.84e-05
x10        -2.165e-10    3.1e-05  -6.98e-06      1.000   -6.08e-05    6.08e-05
x11        -4.665e-10   7.28e-05  -6.41e-06      1.000      -0.000       0.000
x12         -4.62e-10   7.23e-05  -6.39e-06      1.000      -0.000       0.000
x13        -4.906e-10   7.43e-05   -6.6e-06      1.000      -0.000       0.000
x14        -3.985e-09      0.000  -1.87e-05      1.000      -0.000       0.000
x15        -4.897e-10   7.48e-05  -6.55e-06      1.000      -0.000       0.000
x16        -7.327e-10   9.24e-05  -7.93e-06      1.000      -0.000       0.000
x17        -4.173e-10   6.93e-05  -6.02e-06      1.000      -0.000       0.000
x18        -3.397e-10   6.02e-05  -5.64e-06      1.000      -0.000       0.000
x19        -6.012e-10    8.3e-05  -7.25e-06      1.000      -0.000       0.000
x20         -9.09e-10      0.000  -9.05e-06      1.000      -0.000       0.000
x21        -6.188e-09      0.000  -2.32e-05      1.000      -0.001       0.001
x22        -1.992e-09      0.000  -1.33e-05      1.000      -0.000       0.000
x23        -3.669e-09      0.000  -1.79e-05      1.000      -0.000       0.000
x24        -1.065e-09      0.000  -1.01e-05      1.000      -0.000       0.000
ar.L1         -1.2073   5.73e-10  -2.11e+09      0.000      -1.207      -1.207
ar.L2         -0.9083   5.93e-10  -1.53e+09      0.000      -0.908      -0.908
ar.L3         -0.4033   5.84e-10  -6.91e+08      0.000      -0.403      -0.403
sigma2       8.06e-11   6.94e-11      1.162      0.245   -5.54e-11    2.17e-10
===================================================================================
Ljung-Box (L1) (Q):                  13.77   Jarque-Bera (JB):           2436796.68
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             4.07
Prob(H) (two-sided):                  0.00   Kurtosis:                       272.41
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.23e+28. Standard errors may be unstable.
ARIMA order: (3, 3, 0) 
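The stepwise search settles on ARIMA(3,3,0): after differencing the series three times, what remains is modelled as a pure AR(3). As an illustrative sketch — not the `pmdarima.auto_arima` call the notebook uses — AR(p) coefficients can be estimated from a stationary series by ordinary least squares on its lag matrix:

```python
import numpy as np

def fit_ar_ols(y: np.ndarray, p: int) -> np.ndarray:
    """Estimate AR(p) coefficients of a stationary series by OLS."""
    n = len(y)
    target = y[p:]
    # Column k holds lag k+1: row t is [y[t-1], ..., y[t-p]]
    X = np.column_stack([y[p - 1 - k : n - 1 - k] for k in range(p)])
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    return coef

# Illustrative usage: recover known AR(3) coefficients from a simulated series.
rng = np.random.default_rng(0)
phi = [0.5, -0.3, 0.2]  # hypothetical stationary coefficients
y = np.zeros(5000)
eps = rng.standard_normal(5000)
for t in range(3, 5000):
    y[t] = phi[0] * y[t - 1] + phi[1] * y[t - 2] + phi[2] * y[t - 3] + eps[t]
print(fit_ar_ols(y, 3))  # close to phi
```

pmdarima's stepwise search does considerably more (maximum-likelihood SARIMAX fits, AIC comparison across candidate orders), but the OLS view shows what an AR(3) on the differenced series is actually estimating.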

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04274, saving model to LSTM7.h5
45/45 - 2s - loss: 0.2395 - mse: 0.2395 - mae: 0.3786 - val_loss: 0.0427 - val_mse: 0.0427 - val_mae: 0.1818 - lr: 0.0010 - 2s/epoch - 54ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0676 - mse: 0.0676 - mae: 0.1991 - val_loss: 0.0669 - val_mse: 0.0669 - val_mae: 0.2104 - lr: 0.0010 - 206ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0224 - mse: 0.0224 - mae: 0.1167 - val_loss: 0.1187 - val_mse: 0.1187 - val_mae: 0.2940 - lr: 0.0010 - 210ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0132 - mse: 0.0132 - mae: 0.0890 - val_loss: 0.0991 - val_mse: 0.0991 - val_mae: 0.2636 - lr: 0.0010 - 251ms/epoch - 6ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0124 - mse: 0.0124 - mae: 0.0873 - val_loss: 0.1059 - val_mse: 0.1059 - val_mae: 0.2774 - lr: 0.0010 - 222ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0101 - mse: 0.0101 - mae: 0.0811 - val_loss: 0.0999 - val_mse: 0.0999 - val_mae: 0.2688 - lr: 0.0010 - 234ms/epoch - 5ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0103 - mse: 0.0103 - mae: 0.0762 - val_loss: 0.0948 - val_mse: 0.0948 - val_mae: 0.2601 - lr: 1.0000e-04 - 200ms/epoch - 4ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0095 - mse: 0.0095 - mae: 0.0770 - val_loss: 0.0936 - val_mse: 0.0936 - val_mae: 0.2585 - lr: 1.0000e-04 - 210ms/epoch - 5ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0097 - mse: 0.0097 - mae: 0.0751 - val_loss: 0.0945 - val_mse: 0.0945 - val_mae: 0.2603 - lr: 1.0000e-04 - 222ms/epoch - 5ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0093 - mse: 0.0093 - mae: 0.0757 - val_loss: 0.0926 - val_mse: 0.0926 - val_mae: 0.2574 - lr: 1.0000e-04 - 208ms/epoch - 5ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0091 - mse: 0.0091 - mae: 0.0741 - val_loss: 0.0918 - val_mse: 0.0918 - val_mae: 0.2563 - lr: 1.0000e-04 - 202ms/epoch - 4ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0092 - mse: 0.0092 - mae: 0.0738 - val_loss: 0.0919 - val_mse: 0.0919 - val_mae: 0.2566 - lr: 1.0000e-05 - 203ms/epoch - 5ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0080 - mse: 0.0080 - mae: 0.0704 - val_loss: 0.0915 - val_mse: 0.0915 - val_mae: 0.2559 - lr: 1.0000e-05 - 220ms/epoch - 5ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0089 - mse: 0.0089 - mae: 0.0737 - val_loss: 0.0916 - val_mse: 0.0916 - val_mae: 0.2561 - lr: 1.0000e-05 - 206ms/epoch - 5ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0085 - mse: 0.0085 - mae: 0.0721 - val_loss: 0.0914 - val_mse: 0.0914 - val_mae: 0.2558 - lr: 1.0000e-05 - 236ms/epoch - 5ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00016: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0091 - mse: 0.0091 - mae: 0.0744 - val_loss: 0.0912 - val_mse: 0.0912 - val_mae: 0.2553 - lr: 1.0000e-05 - 226ms/epoch - 5ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0098 - mse: 0.0098 - mae: 0.0759 - val_loss: 0.0911 - val_mse: 0.0911 - val_mae: 0.2553 - lr: 1.0000e-05 - 241ms/epoch - 5ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0101 - mse: 0.0101 - mae: 0.0759 - val_loss: 0.0905 - val_mse: 0.0905 - val_mae: 0.2542 - lr: 1.0000e-05 - 230ms/epoch - 5ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0092 - mse: 0.0092 - mae: 0.0743 - val_loss: 0.0906 - val_mse: 0.0906 - val_mae: 0.2545 - lr: 1.0000e-05 - 210ms/epoch - 5ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0088 - mse: 0.0088 - mae: 0.0733 - val_loss: 0.0906 - val_mse: 0.0906 - val_mae: 0.2544 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0092 - mse: 0.0092 - mae: 0.0746 - val_loss: 0.0902 - val_mse: 0.0902 - val_mae: 0.2538 - lr: 1.0000e-05 - 201ms/epoch - 4ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0090 - mse: 0.0090 - mae: 0.0737 - val_loss: 0.0905 - val_mse: 0.0905 - val_mae: 0.2544 - lr: 1.0000e-05 - 188ms/epoch - 4ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0095 - mse: 0.0095 - mae: 0.0746 - val_loss: 0.0904 - val_mse: 0.0904 - val_mae: 0.2542 - lr: 1.0000e-05 - 229ms/epoch - 5ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0089 - mse: 0.0089 - mae: 0.0735 - val_loss: 0.0903 - val_mse: 0.0903 - val_mae: 0.2542 - lr: 1.0000e-05 - 187ms/epoch - 4ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0093 - mse: 0.0093 - mae: 0.0751 - val_loss: 0.0902 - val_mse: 0.0902 - val_mae: 0.2541 - lr: 1.0000e-05 - 199ms/epoch - 4ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0089 - mse: 0.0089 - mae: 0.0743 - val_loss: 0.0903 - val_mse: 0.0903 - val_mae: 0.2543 - lr: 1.0000e-05 - 189ms/epoch - 4ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0084 - mse: 0.0084 - mae: 0.0710 - val_loss: 0.0902 - val_mse: 0.0902 - val_mae: 0.2540 - lr: 1.0000e-05 - 193ms/epoch - 4ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0084 - mse: 0.0084 - mae: 0.0725 - val_loss: 0.0905 - val_mse: 0.0905 - val_mae: 0.2548 - lr: 1.0000e-05 - 198ms/epoch - 4ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0095 - mse: 0.0095 - mae: 0.0750 - val_loss: 0.0907 - val_mse: 0.0907 - val_mae: 0.2550 - lr: 1.0000e-05 - 210ms/epoch - 5ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0093 - mse: 0.0093 - mae: 0.0749 - val_loss: 0.0906 - val_mse: 0.0906 - val_mae: 0.2549 - lr: 1.0000e-05 - 198ms/epoch - 4ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0084 - mse: 0.0084 - mae: 0.0719 - val_loss: 0.0905 - val_mse: 0.0905 - val_mae: 0.2548 - lr: 1.0000e-05 - 210ms/epoch - 5ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0085 - mse: 0.0085 - mae: 0.0715 - val_loss: 0.0900 - val_mse: 0.0900 - val_mae: 0.2540 - lr: 1.0000e-05 - 185ms/epoch - 4ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0089 - mse: 0.0089 - mae: 0.0734 - val_loss: 0.0898 - val_mse: 0.0898 - val_mae: 0.2536 - lr: 1.0000e-05 - 205ms/epoch - 5ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0091 - mse: 0.0091 - mae: 0.0748 - val_loss: 0.0895 - val_mse: 0.0895 - val_mae: 0.2532 - lr: 1.0000e-05 - 174ms/epoch - 4ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0095 - mse: 0.0095 - mae: 0.0751 - val_loss: 0.0893 - val_mse: 0.0893 - val_mae: 0.2529 - lr: 1.0000e-05 - 203ms/epoch - 5ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0082 - mse: 0.0082 - mae: 0.0697 - val_loss: 0.0889 - val_mse: 0.0889 - val_mae: 0.2522 - lr: 1.0000e-05 - 219ms/epoch - 5ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0723 - val_loss: 0.0892 - val_mse: 0.0892 - val_mae: 0.2528 - lr: 1.0000e-05 - 189ms/epoch - 4ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0080 - mse: 0.0080 - mae: 0.0693 - val_loss: 0.0892 - val_mse: 0.0892 - val_mae: 0.2527 - lr: 1.0000e-05 - 250ms/epoch - 6ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0082 - mse: 0.0082 - mae: 0.0693 - val_loss: 0.0892 - val_mse: 0.0892 - val_mae: 0.2528 - lr: 1.0000e-05 - 239ms/epoch - 5ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0729 - val_loss: 0.0890 - val_mse: 0.0890 - val_mae: 0.2525 - lr: 1.0000e-05 - 214ms/epoch - 5ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0080 - mse: 0.0080 - mae: 0.0701 - val_loss: 0.0892 - val_mse: 0.0892 - val_mae: 0.2528 - lr: 1.0000e-05 - 202ms/epoch - 4ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0088 - mse: 0.0088 - mae: 0.0718 - val_loss: 0.0889 - val_mse: 0.0889 - val_mae: 0.2524 - lr: 1.0000e-05 - 214ms/epoch - 5ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0084 - mse: 0.0084 - mae: 0.0707 - val_loss: 0.0887 - val_mse: 0.0887 - val_mae: 0.2521 - lr: 1.0000e-05 - 203ms/epoch - 5ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0086 - mse: 0.0086 - mae: 0.0721 - val_loss: 0.0886 - val_mse: 0.0886 - val_mae: 0.2520 - lr: 1.0000e-05 - 218ms/epoch - 5ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0081 - mse: 0.0081 - mae: 0.0694 - val_loss: 0.0885 - val_mse: 0.0885 - val_mae: 0.2518 - lr: 1.0000e-05 - 182ms/epoch - 4ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0086 - mse: 0.0086 - mae: 0.0718 - val_loss: 0.0891 - val_mse: 0.0891 - val_mae: 0.2530 - lr: 1.0000e-05 - 240ms/epoch - 5ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0703 - val_loss: 0.0894 - val_mse: 0.0894 - val_mae: 0.2535 - lr: 1.0000e-05 - 204ms/epoch - 5ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0082 - mse: 0.0082 - mae: 0.0704 - val_loss: 0.0891 - val_mse: 0.0891 - val_mae: 0.2531 - lr: 1.0000e-05 - 226ms/epoch - 5ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0085 - mse: 0.0085 - mae: 0.0730 - val_loss: 0.0890 - val_mse: 0.0890 - val_mae: 0.2530 - lr: 1.0000e-05 - 224ms/epoch - 5ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0082 - mse: 0.0082 - mae: 0.0703 - val_loss: 0.0892 - val_mse: 0.0892 - val_mae: 0.2532 - lr: 1.0000e-05 - 173ms/epoch - 4ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.04274
45/45 - 0s - loss: 0.0083 - mse: 0.0083 - mae: 0.0697 - val_loss: 0.0892 - val_mse: 0.0892 - val_mae: 0.2533 - lr: 1.0000e-05 - 221ms/epoch - 5ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close:		50.0% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 133.6684179910327 
RMSE:	 11.561505870388714 
MAPE:	 10.289389775089397

EMA
Prediction vs Close:		50.0% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 34.52931938659473 
RMSE:	 5.876165364129462 
MAPE:	 4.852473639818076

WMA
Prediction vs Close:		51.49% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 36.74468487644727 
RMSE:	 6.061739426637149 
MAPE:	 4.85767480758183

DEMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 87.45802496937279 
RMSE:	 9.351899538028238 
MAPE:	 8.239361009856534

KAMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 55.655998887145394 
RMSE:	 7.460294825752223 
MAPE:	 6.325008398714769

MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14
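MIDPOINT is simply the mean of the highest and lowest price inside each rolling window. A pure-NumPy sketch equivalent in spirit to the TA-Lib call documented above (TA-Lib fills the warm-up region with NaN the same way):

```python
import numpy as np

def midpoint(price: np.ndarray, period: int = 14) -> np.ndarray:
    """MidPoint over period: (highest + lowest) / 2 within a rolling window."""
    out = np.full(len(price), np.nan)
    for t in range(period - 1, len(price)):
        window = price[t - period + 1 : t + 1]
        out[t] = (window.max() + window.min()) / 2.0
    return out
```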

Working on MIDPOINT predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16999.768, Time=3.42 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14568.591, Time=4.75 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15581.065, Time=8.84 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14566.591, Time=7.41 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16536.628, Time=9.30 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-13971.493, Time=10.19 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-17226.044, Time=21.94 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-14562.591, Time=9.57 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-16754.945, Time=20.60 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-15001.855, Time=20.53 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 116.545 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8640.022
Date:                Sun, 12 Dec 2021   AIC                         -17226.044
Time:                        17:38:50   BIC                         -17099.391
Sample:                             0   HQIC                        -17177.404
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -5.031e-09   1.06e-05     -0.000      1.000   -2.08e-05    2.08e-05
x2          -4.99e-09   8.12e-06     -0.001      1.000   -1.59e-05    1.59e-05
x3         -5.114e-09   1.38e-05     -0.000      1.000    -2.7e-05     2.7e-05
x4             1.0000   8.91e-06   1.12e+05      0.000       1.000       1.000
x5          -4.55e-09    8.2e-06     -0.001      1.000   -1.61e-05    1.61e-05
x6         -9.992e-08      0.001     -0.000      1.000      -0.002       0.002
x7         -4.607e-09   1.97e-05     -0.000      1.000   -3.86e-05    3.86e-05
x8         -4.591e-09   1.77e-05     -0.000      1.000   -3.48e-05    3.48e-05
x9         -2.538e-09   1.13e-05     -0.000      1.000   -2.21e-05    2.21e-05
x10        -4.315e-09   6.08e-06     -0.001      0.999   -1.19e-05    1.19e-05
x11        -4.545e-09   1.62e-05     -0.000      1.000   -3.18e-05    3.18e-05
x12        -4.701e-09   1.97e-05     -0.000      1.000   -3.87e-05    3.87e-05
x13        -4.823e-09   1.18e-05     -0.000      1.000    -2.3e-05     2.3e-05
x14         -4.08e-08   4.99e-05     -0.001      0.999   -9.79e-05    9.78e-05
x15        -5.557e-09   2.03e-05     -0.000      1.000   -3.99e-05    3.99e-05
x16        -3.541e-09    1.3e-05     -0.000      1.000   -2.55e-05    2.55e-05
x17        -3.463e-09   1.51e-05     -0.000      1.000   -2.97e-05    2.97e-05
x18        -1.534e-08      4e-05     -0.000      1.000   -7.85e-05    7.85e-05
x19        -6.118e-09   2.07e-05     -0.000      1.000   -4.05e-05    4.05e-05
x20        -1.581e-08   3.38e-05     -0.000      1.000   -6.62e-05    6.61e-05
x21        -5.505e-08    5.6e-05     -0.001      0.999      -0.000       0.000
x22        -2.936e-08   4.55e-05     -0.001      0.999   -8.92e-05    8.92e-05
x23        -3.882e-08   4.89e-05     -0.001      0.999   -9.58e-05    9.57e-05
x24        -2.099e-08   4.87e-05     -0.000      1.000   -9.54e-05    9.54e-05
ma.L1         -1.3900   1.23e-07  -1.13e+07      0.000      -1.390      -1.390
ma.L2          0.4044   1.43e-07   2.82e+06      0.000       0.404       0.404
sigma2      7.525e-11   7.22e-11      1.042      0.297   -6.63e-11    2.17e-10
===================================================================================
Ljung-Box (L1) (Q):                  55.84   Jarque-Bera (JB):           1335305.59
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.09   Skew:                             5.74
Prob(H) (two-sided):                  0.00   Kurtosis:                       202.19
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.77e+23. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 
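Both stepwise searches select d = 3, i.e. each indicator series is differenced three times before the AR/MA terms are fit. A small sketch of that differencing and its inversion — the bookkeeping ARIMA performs internally to map forecasts back to the original scale:

```python
import numpy as np

def difference(y: np.ndarray, d: int):
    """d-th order differencing plus the initial values needed to invert it."""
    heads, tmp = [], y
    for _ in range(d):
        heads.append(tmp[0])  # remember the first value at each level
        tmp = np.diff(tmp)
    return tmp, heads

def undifference(dy: np.ndarray, heads) -> np.ndarray:
    """Invert the differencing by cumulative summation, level by level."""
    out = dy
    for h in reversed(heads):
        out = np.concatenate(([h], h + np.cumsum(out)))
    return out
```

Each round of differencing shortens the series by one point and removes one order of polynomial trend, which is why d = 3 leaves the near-zero sigma2 values seen in both SARIMAX summaries.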

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.42203, saving model to LSTM7.h5
58/58 - 3s - loss: 0.3093 - mse: 0.3093 - mae: 0.3561 - val_loss: 0.4220 - val_mse: 0.4220 - val_mae: 0.6374 - lr: 0.0010 - 3s/epoch - 51ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.42203 to 0.25761, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0252 - mse: 0.0252 - mae: 0.1265 - val_loss: 0.2576 - val_mse: 0.2576 - val_mae: 0.4938 - lr: 0.0010 - 305ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.25761 to 0.18104, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0194 - mse: 0.0194 - mae: 0.1105 - val_loss: 0.1810 - val_mse: 0.1810 - val_mae: 0.4104 - lr: 0.0010 - 281ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.18104 to 0.13620, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0157 - mse: 0.0157 - mae: 0.1005 - val_loss: 0.1362 - val_mse: 0.1362 - val_mae: 0.3531 - lr: 0.0010 - 286ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.13620 to 0.10533, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0120 - mse: 0.0120 - mae: 0.0861 - val_loss: 0.1053 - val_mse: 0.1053 - val_mae: 0.3076 - lr: 0.0010 - 266ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.10533 to 0.09835, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0124 - mse: 0.0124 - mae: 0.0873 - val_loss: 0.0983 - val_mse: 0.0983 - val_mae: 0.2965 - lr: 0.0010 - 250ms/epoch - 4ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.09835 to 0.07233, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0104 - mse: 0.0104 - mae: 0.0808 - val_loss: 0.0723 - val_mse: 0.0723 - val_mae: 0.2497 - lr: 0.0010 - 277ms/epoch - 5ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.07233 to 0.07117, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0096 - mse: 0.0096 - mae: 0.0778 - val_loss: 0.0712 - val_mse: 0.0712 - val_mae: 0.2472 - lr: 0.0010 - 264ms/epoch - 5ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.07117 to 0.04473, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0739 - val_loss: 0.0447 - val_mse: 0.0447 - val_mae: 0.1884 - lr: 0.0010 - 287ms/epoch - 5ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.04473
58/58 - 0s - loss: 0.0099 - mse: 0.0099 - mae: 0.0787 - val_loss: 0.0562 - val_mse: 0.0562 - val_mae: 0.2155 - lr: 0.0010 - 255ms/epoch - 4ms/step
Epoch 11/500

Epoch 00011: val_loss improved from 0.04473 to 0.04300, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0682 - val_loss: 0.0430 - val_mse: 0.0430 - val_mae: 0.1846 - lr: 0.0010 - 255ms/epoch - 4ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.04300
58/58 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0679 - val_loss: 0.0515 - val_mse: 0.0515 - val_mae: 0.2046 - lr: 0.0010 - 257ms/epoch - 4ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.04300
58/58 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0663 - val_loss: 0.0443 - val_mse: 0.0443 - val_mae: 0.1869 - lr: 0.0010 - 268ms/epoch - 5ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.04300
58/58 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0614 - val_loss: 0.0594 - val_mse: 0.0594 - val_mae: 0.2221 - lr: 0.0010 - 281ms/epoch - 5ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.04300
58/58 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0627 - val_loss: 0.0463 - val_mse: 0.0463 - val_mae: 0.1911 - lr: 0.0010 - 259ms/epoch - 4ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00016: val_loss did not improve from 0.04300
58/58 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0619 - val_loss: 0.0496 - val_mse: 0.0496 - val_mae: 0.1986 - lr: 0.0010 - 262ms/epoch - 5ms/step
Epoch 17/500

Epoch 00017: val_loss improved from 0.04300 to 0.03942, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0075 - mse: 0.0075 - mae: 0.0679 - val_loss: 0.0394 - val_mse: 0.0394 - val_mae: 0.1730 - lr: 1.0000e-04 - 284ms/epoch - 5ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0626 - val_loss: 0.0411 - val_mse: 0.0411 - val_mae: 0.1774 - lr: 1.0000e-04 - 255ms/epoch - 4ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0563 - val_loss: 0.0432 - val_mse: 0.0432 - val_mae: 0.1829 - lr: 1.0000e-04 - 254ms/epoch - 4ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0546 - val_loss: 0.0439 - val_mse: 0.0439 - val_mae: 0.1847 - lr: 1.0000e-04 - 275ms/epoch - 5ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0539 - val_loss: 0.0449 - val_mse: 0.0449 - val_mae: 0.1872 - lr: 1.0000e-04 - 270ms/epoch - 5ms/step
Epoch 22/500

Epoch 00022: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00022: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0514 - val_loss: 0.0454 - val_mse: 0.0454 - val_mae: 0.1884 - lr: 1.0000e-04 - 270ms/epoch - 5ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0486 - val_loss: 0.0454 - val_mse: 0.0454 - val_mae: 0.1883 - lr: 1.0000e-05 - 308ms/epoch - 5ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0507 - val_loss: 0.0452 - val_mse: 0.0452 - val_mae: 0.1879 - lr: 1.0000e-05 - 287ms/epoch - 5ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0498 - val_loss: 0.0452 - val_mse: 0.0452 - val_mae: 0.1880 - lr: 1.0000e-05 - 226ms/epoch - 4ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0511 - val_loss: 0.0452 - val_mse: 0.0452 - val_mae: 0.1879 - lr: 1.0000e-05 - 266ms/epoch - 5ms/step
Epoch 27/500

Epoch 00027: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00027: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0515 - val_loss: 0.0453 - val_mse: 0.0453 - val_mae: 0.1882 - lr: 1.0000e-05 - 254ms/epoch - 4ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0512 - val_loss: 0.0453 - val_mse: 0.0453 - val_mae: 0.1882 - lr: 1.0000e-05 - 266ms/epoch - 5ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0497 - val_loss: 0.0452 - val_mse: 0.0452 - val_mae: 0.1879 - lr: 1.0000e-05 - 247ms/epoch - 4ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0508 - val_loss: 0.0452 - val_mse: 0.0452 - val_mae: 0.1878 - lr: 1.0000e-05 - 283ms/epoch - 5ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0510 - val_loss: 0.0451 - val_mse: 0.0451 - val_mae: 0.1877 - lr: 1.0000e-05 - 251ms/epoch - 4ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0517 - val_loss: 0.0450 - val_mse: 0.0450 - val_mae: 0.1874 - lr: 1.0000e-05 - 239ms/epoch - 4ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0534 - val_loss: 0.0449 - val_mse: 0.0449 - val_mae: 0.1872 - lr: 1.0000e-05 - 265ms/epoch - 5ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0490 - val_loss: 0.0449 - val_mse: 0.0449 - val_mae: 0.1872 - lr: 1.0000e-05 - 284ms/epoch - 5ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0490 - val_loss: 0.0448 - val_mse: 0.0448 - val_mae: 0.1870 - lr: 1.0000e-05 - 265ms/epoch - 5ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0476 - val_loss: 0.0451 - val_mse: 0.0451 - val_mae: 0.1878 - lr: 1.0000e-05 - 249ms/epoch - 4ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0507 - val_loss: 0.0452 - val_mse: 0.0452 - val_mae: 0.1878 - lr: 1.0000e-05 - 258ms/epoch - 4ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0511 - val_loss: 0.0452 - val_mse: 0.0452 - val_mae: 0.1880 - lr: 1.0000e-05 - 266ms/epoch - 5ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0530 - val_loss: 0.0452 - val_mse: 0.0452 - val_mae: 0.1880 - lr: 1.0000e-05 - 256ms/epoch - 4ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0500 - val_loss: 0.0451 - val_mse: 0.0451 - val_mae: 0.1876 - lr: 1.0000e-05 - 281ms/epoch - 5ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0481 - val_loss: 0.0451 - val_mse: 0.0451 - val_mae: 0.1876 - lr: 1.0000e-05 - 301ms/epoch - 5ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0492 - val_loss: 0.0447 - val_mse: 0.0447 - val_mae: 0.1867 - lr: 1.0000e-05 - 294ms/epoch - 5ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0481 - val_loss: 0.0450 - val_mse: 0.0450 - val_mae: 0.1875 - lr: 1.0000e-05 - 293ms/epoch - 5ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0481 - val_loss: 0.0451 - val_mse: 0.0451 - val_mae: 0.1878 - lr: 1.0000e-05 - 315ms/epoch - 5ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0506 - val_loss: 0.0452 - val_mse: 0.0452 - val_mae: 0.1880 - lr: 1.0000e-05 - 275ms/epoch - 5ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0502 - val_loss: 0.0452 - val_mse: 0.0452 - val_mae: 0.1880 - lr: 1.0000e-05 - 265ms/epoch - 5ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0491 - val_loss: 0.0453 - val_mse: 0.0453 - val_mae: 0.1882 - lr: 1.0000e-05 - 262ms/epoch - 5ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0499 - val_loss: 0.0453 - val_mse: 0.0453 - val_mae: 0.1881 - lr: 1.0000e-05 - 292ms/epoch - 5ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0505 - val_loss: 0.0452 - val_mse: 0.0452 - val_mae: 0.1879 - lr: 1.0000e-05 - 251ms/epoch - 4ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0480 - val_loss: 0.0456 - val_mse: 0.0456 - val_mae: 0.1887 - lr: 1.0000e-05 - 250ms/epoch - 4ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0485 - val_loss: 0.0455 - val_mse: 0.0455 - val_mae: 0.1885 - lr: 1.0000e-05 - 263ms/epoch - 5ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0488 - val_loss: 0.0453 - val_mse: 0.0453 - val_mae: 0.1881 - lr: 1.0000e-05 - 277ms/epoch - 5ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0477 - val_loss: 0.0453 - val_mse: 0.0453 - val_mae: 0.1881 - lr: 1.0000e-05 - 266ms/epoch - 5ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0473 - val_loss: 0.0456 - val_mse: 0.0456 - val_mae: 0.1887 - lr: 1.0000e-05 - 231ms/epoch - 4ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0034 - mse: 0.0034 - mae: 0.0457 - val_loss: 0.0455 - val_mse: 0.0455 - val_mae: 0.1885 - lr: 1.0000e-05 - 277ms/epoch - 5ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0488 - val_loss: 0.0455 - val_mse: 0.0455 - val_mae: 0.1886 - lr: 1.0000e-05 - 267ms/epoch - 5ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0486 - val_loss: 0.0456 - val_mse: 0.0456 - val_mae: 0.1887 - lr: 1.0000e-05 - 239ms/epoch - 4ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0490 - val_loss: 0.0454 - val_mse: 0.0454 - val_mae: 0.1882 - lr: 1.0000e-05 - 239ms/epoch - 4ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0470 - val_loss: 0.0456 - val_mse: 0.0456 - val_mae: 0.1888 - lr: 1.0000e-05 - 262ms/epoch - 5ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0490 - val_loss: 0.0457 - val_mse: 0.0457 - val_mae: 0.1889 - lr: 1.0000e-05 - 270ms/epoch - 5ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0459 - val_loss: 0.0457 - val_mse: 0.0457 - val_mae: 0.1888 - lr: 1.0000e-05 - 271ms/epoch - 5ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0477 - val_loss: 0.0457 - val_mse: 0.0457 - val_mae: 0.1888 - lr: 1.0000e-05 - 268ms/epoch - 5ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0519 - val_loss: 0.0457 - val_mse: 0.0457 - val_mae: 0.1888 - lr: 1.0000e-05 - 258ms/epoch - 4ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0466 - val_loss: 0.0459 - val_mse: 0.0459 - val_mae: 0.1893 - lr: 1.0000e-05 - 294ms/epoch - 5ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0500 - val_loss: 0.0459 - val_mse: 0.0459 - val_mae: 0.1894 - lr: 1.0000e-05 - 258ms/epoch - 4ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0488 - val_loss: 0.0461 - val_mse: 0.0461 - val_mae: 0.1898 - lr: 1.0000e-05 - 261ms/epoch - 4ms/step
Epoch 67/500

Epoch 00067: val_loss did not improve from 0.03942
58/58 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0481 - val_loss: 0.0459 - val_mse: 0.0459 - val_mae: 0.1893 - lr: 1.0000e-05 - 263ms/epoch - 5ms/step
Epoch 00067: early stopping
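The learning-rate trace in the log above (1e-3 → 1e-4 → 1e-5, then held at the floor) is the signature of a `ReduceLROnPlateau`-style schedule. The logic can be mimicked in plain Python; the `factor`, `patience`, and `min_lr` values here are assumptions inferred from the log, not taken from the original training code:

```python
# Minimal sketch of a ReduceLROnPlateau-style schedule. Assumptions
# (inferred from the log, not from the source): factor=0.1, patience=4,
# min_lr=1e-5 (the floor at which the log's rate stops decreasing).
def schedule_lr(val_losses, lr=1e-3, factor=0.1, patience=4, min_lr=1e-5):
    """Return the learning rate in effect after each epoch."""
    best = float("inf")
    wait = 0
    lrs = []
    for loss in val_losses:
        if loss < best:          # new best val_loss: reset the counter
            best = loss
            wait = 0
        else:
            wait += 1
            if wait > patience:  # plateau: cut the rate, clamp at the floor
                lr = max(lr * factor, min_lr)
                wait = 0
        lrs.append(lr)
    return lrs

losses = [0.05, 0.043, 0.052, 0.051, 0.044, 0.059, 0.046, 0.050]
print(schedule_lr(losses))
```

Once the rate reaches `min_lr`, further "reducing learning rate" messages (as at epoch 27 above) no longer change it.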
SMA
Prediction vs Close:		50.0% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 133.6684179910327 
RMSE:	 11.561505870388714 
MAPE:	 10.289389775089397

EMA
Prediction vs Close:		50.0% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 34.52931938659473 
RMSE:	 5.876165364129462 
MAPE:	 4.852473639818076

WMA
Prediction vs Close:		51.49% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 36.74468487644727 
RMSE:	 6.061739426637149 
MAPE:	 4.85767480758183

DEMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 87.45802496937279 
RMSE:	 9.351899538028238 
MAPE:	 8.239361009856534

KAMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 55.655998887145394 
RMSE:	 7.460294825752223 
MAPE:	 6.325008398714769

MIDPOINT
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 28.58496814832563 
RMSE:	 5.346491199686541 
MAPE:	 4.412567377496203
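The per-indicator summaries above report two directional-accuracy figures alongside MSE, RMSE, and MAPE. A hedged sketch of how such metrics might be computed follows; the error formulas are standard, but the exact definition of the accuracy lines (in particular "Prediction vs Prediction") is an assumption, not taken from the original code:

```python
import math

def mse(actual, pred):
    return sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)

def rmse(actual, pred):
    return math.sqrt(mse(actual, pred))

def mape(actual, pred):
    # mean absolute percentage error, in percent
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)

def directional_accuracy(reference, pred):
    """Percentage of steps where pred's day-over-day direction matches
    reference's direction (a plausible reading of the accuracy lines;
    hypothetical, not the notebook's actual definition)."""
    hits = sum(
        (p1 - p0) * (r1 - r0) > 0
        for (r0, r1), (p0, p1) in zip(zip(reference, reference[1:]),
                                      zip(pred, pred[1:]))
    )
    return 100 * hits / (len(reference) - 1)

close = [100.0, 101.0, 99.5, 100.5, 102.0]
preds = [100.2, 100.8, 100.1, 100.9, 101.5]
print(round(rmse(close, preds), 4), round(mape(close, preds), 4),
      round(directional_accuracy(close, preds), 2))
```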
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
19
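The T3 smoother whose TA-Lib signature is shown above is Tillson's triple-smoothed moving average. A minimal sketch of the idea (not TA-Lib's exact implementation, which also handles an unstable warm-up period): define a "generalized DEMA" GD(x) = EMA(x)·(1+v) − EMA(EMA(x))·v with volume factor v (TA-Lib's `vfactor`, 0.7 here), then apply it three times:

```python
# Hedged sketch of Tillson's T3; EMA seeding and warm-up handling differ
# from TA-Lib, so values will not match TA-Lib bar-for-bar.
def ema(series, period):
    alpha = 2.0 / (period + 1)
    out = [series[0]]                      # seed with the first value
    for x in series[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

def gd(series, period, v):
    # generalized DEMA: over-weight the first EMA, subtract the double EMA
    e1 = ema(series, period)
    e2 = ema(e1, period)
    return [a * (1 + v) - b * v for a, b in zip(e1, e2)]

def t3(series, period=5, v=0.7):
    # three cascaded GD passes
    return gd(gd(gd(series, period, v), period, v), period, v)

prices = [float(p) for p in range(100, 120)]
print(t3(prices)[-1])
```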

Working on T3 predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17000.569, Time=3.09 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-15576.554, Time=5.76 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16078.305, Time=8.09 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-15574.554, Time=9.35 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16998.627, Time=3.13 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-16429.916, Time=12.60 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-17000.664, Time=3.17 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-15700.026, Time=12.26 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-15704.282, Time=15.13 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-16998.664, Time=3.57 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 76.160 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8527.332
Date:                Sun, 12 Dec 2021   AIC                         -17000.664
Time:                        17:44:29   BIC                         -16874.011
Sample:                             0   HQIC                        -16952.024
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          8.378e-14   2.16e-06   3.89e-08      1.000   -4.23e-06    4.23e-06
x2          7.457e-14   2.15e-06   3.47e-08      1.000   -4.22e-06    4.22e-06
x3          2.279e-14   2.16e-06   1.05e-08      1.000   -4.24e-06    4.24e-06
x4             1.0000   2.16e-06   4.63e+05      0.000       1.000       1.000
x5          1.211e-12   2.07e-06   5.86e-07      1.000   -4.05e-06    4.05e-06
x6          3.146e-15   2.67e-06   1.18e-09      1.000   -5.23e-06    5.23e-06
x7          1.593e-13   2.15e-06   7.41e-08      1.000   -4.21e-06    4.21e-06
x8            -0.0001    2.1e-06    -48.778      0.000      -0.000   -9.82e-05
x9          5.141e-14   6.35e-07    8.1e-08      1.000   -1.24e-06    1.24e-06
x10        -6.174e-05   1.34e-06    -45.995      0.000   -6.44e-05   -5.91e-05
x11            0.0003   2.15e-06    148.354      0.000       0.000       0.000
x12           -0.0002   2.02e-06    -93.730      0.000      -0.000      -0.000
x13         1.967e-14   2.16e-06    9.1e-09      1.000   -4.23e-06    4.23e-06
x14        -1.297e-14   5.65e-06  -2.29e-09      1.000   -1.11e-05    1.11e-05
x15         -3.18e-12   1.82e-06  -1.75e-06      1.000   -3.57e-06    3.57e-06
x16        -1.426e-12   4.51e-06  -3.16e-07      1.000   -8.84e-06    8.84e-06
x17         7.474e-13   2.37e-06   3.16e-07      1.000   -4.64e-06    4.64e-06
x18         -2.92e-13    2.9e-06  -1.01e-07      1.000   -5.68e-06    5.68e-06
x19        -4.211e-14   1.89e-06  -2.22e-08      1.000   -3.71e-06    3.71e-06
x20        -1.515e-13    1.2e-06  -1.26e-07      1.000   -2.36e-06    2.36e-06
x21         6.555e-13   6.37e-06   1.03e-07      1.000   -1.25e-05    1.25e-05
x22         1.212e-14   6.19e-06   1.96e-09      1.000   -1.21e-05    1.21e-05
x23        -3.877e-13   3.76e-06  -1.03e-07      1.000   -7.38e-06    7.38e-06
x24         8.127e-15   4.01e-06   2.03e-09      1.000   -7.86e-06    7.86e-06
ma.L1         -1.3370   3.84e-12  -3.48e+11      0.000      -1.337      -1.337
ma.L2          0.4289   1.65e-12    2.6e+11      0.000       0.429       0.429
sigma2          1e-10   6.99e-11      1.430      0.153   -3.71e-11    2.37e-10
===================================================================================
Ljung-Box (L1) (Q):                   4.57   Jarque-Bera (JB):           3228712.87
Prob(Q):                              0.03   Prob(JB):                         0.00
Heteroskedasticity (H):               0.12   Skew:                            -9.87
Prob(H) (two-sided):                  0.00   Kurtosis:                       312.63
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.57e+30. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 
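The stepwise search above ranks candidate (p, d, q) orders by the Akaike Information Criterion, AIC = 2k − 2·ln(L̂), where k is the number of estimated parameters and L̂ the maximized likelihood. As a sanity check: the winning SARIMAX(0, 3, 2) fit reports Log Likelihood 8527.332 and, per its summary, k = 27 parameters (24 exogenous regressors, ma.L1, ma.L2, and sigma2), which reproduces the reported AIC:

```python
# AIC from log-likelihood and parameter count; k = 27 is read off the
# SARIMAX summary above (24 x-terms + ma.L1 + ma.L2 + sigma2).
def aic(log_likelihood, n_params):
    return 2 * n_params - 2 * log_likelihood

print(round(aic(8527.332, 27), 3))  # -> -17000.664
```

Lower AIC is better, so the large negative values in the search trace indicate tighter fits after penalizing model size.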

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.03552, saving model to LSTM7.h5
43/43 - 2s - loss: 0.3092 - mse: 0.3092 - mae: 0.3824 - val_loss: 0.0355 - val_mse: 0.0355 - val_mae: 0.1573 - lr: 0.0010 - 2s/epoch - 56ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.03552 to 0.03289, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0320 - mse: 0.0320 - mae: 0.1413 - val_loss: 0.0329 - val_mse: 0.0329 - val_mae: 0.1486 - lr: 0.0010 - 227ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.03289
43/43 - 0s - loss: 0.0216 - mse: 0.0216 - mae: 0.1169 - val_loss: 0.0346 - val_mse: 0.0346 - val_mae: 0.1516 - lr: 0.0010 - 194ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.03289
43/43 - 0s - loss: 0.0166 - mse: 0.0166 - mae: 0.1031 - val_loss: 0.0329 - val_mse: 0.0329 - val_mae: 0.1479 - lr: 0.0010 - 196ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.03289 to 0.02738, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0137 - mse: 0.0137 - mae: 0.0924 - val_loss: 0.0274 - val_mse: 0.0274 - val_mae: 0.1340 - lr: 0.0010 - 212ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.02738 to 0.02701, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0112 - mse: 0.0112 - mae: 0.0823 - val_loss: 0.0270 - val_mse: 0.0270 - val_mae: 0.1335 - lr: 0.0010 - 194ms/epoch - 5ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.02701 to 0.02053, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0105 - mse: 0.0105 - mae: 0.0819 - val_loss: 0.0205 - val_mse: 0.0205 - val_mae: 0.1143 - lr: 0.0010 - 188ms/epoch - 4ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.02053 to 0.01771, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0102 - mse: 0.0102 - mae: 0.0802 - val_loss: 0.0177 - val_mse: 0.0177 - val_mae: 0.1057 - lr: 0.0010 - 210ms/epoch - 5ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.01771 to 0.01332, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0088 - mse: 0.0088 - mae: 0.0745 - val_loss: 0.0133 - val_mse: 0.0133 - val_mae: 0.0930 - lr: 0.0010 - 223ms/epoch - 5ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.01332 to 0.01268, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0086 - mse: 0.0086 - mae: 0.0725 - val_loss: 0.0127 - val_mse: 0.0127 - val_mae: 0.0895 - lr: 0.0010 - 200ms/epoch - 5ms/step
Epoch 11/500

Epoch 00011: val_loss improved from 0.01268 to 0.01164, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0678 - val_loss: 0.0116 - val_mse: 0.0116 - val_mae: 0.0861 - lr: 0.0010 - 234ms/epoch - 5ms/step
Epoch 12/500

Epoch 00012: val_loss improved from 0.01164 to 0.01039, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0654 - val_loss: 0.0104 - val_mse: 0.0104 - val_mae: 0.0826 - lr: 0.0010 - 232ms/epoch - 5ms/step
Epoch 13/500

Epoch 00013: val_loss improved from 0.01039 to 0.00954, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0676 - val_loss: 0.0095 - val_mse: 0.0095 - val_mae: 0.0808 - lr: 0.0010 - 230ms/epoch - 5ms/step
Epoch 14/500

Epoch 00014: val_loss improved from 0.00954 to 0.00885, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0614 - val_loss: 0.0088 - val_mse: 0.0088 - val_mae: 0.0771 - lr: 0.0010 - 239ms/epoch - 6ms/step
Epoch 15/500

Epoch 00015: val_loss improved from 0.00885 to 0.00870, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0624 - val_loss: 0.0087 - val_mse: 0.0087 - val_mae: 0.0785 - lr: 0.0010 - 196ms/epoch - 5ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00870
43/43 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0601 - val_loss: 0.0099 - val_mse: 0.0099 - val_mae: 0.0849 - lr: 0.0010 - 174ms/epoch - 4ms/step
Epoch 17/500

Epoch 00017: val_loss improved from 0.00870 to 0.00834, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0577 - val_loss: 0.0083 - val_mse: 0.0083 - val_mae: 0.0776 - lr: 0.0010 - 211ms/epoch - 5ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0567 - val_loss: 0.0102 - val_mse: 0.0102 - val_mae: 0.0856 - lr: 0.0010 - 194ms/epoch - 5ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0539 - val_loss: 0.0104 - val_mse: 0.0104 - val_mae: 0.0859 - lr: 0.0010 - 181ms/epoch - 4ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0538 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0898 - lr: 0.0010 - 221ms/epoch - 5ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0560 - val_loss: 0.0155 - val_mse: 0.0155 - val_mae: 0.1046 - lr: 0.0010 - 184ms/epoch - 4ms/step
Epoch 22/500

Epoch 00022: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00022: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0535 - val_loss: 0.0100 - val_mse: 0.0100 - val_mae: 0.0843 - lr: 0.0010 - 221ms/epoch - 5ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0520 - val_loss: 0.0098 - val_mse: 0.0098 - val_mae: 0.0837 - lr: 1.0000e-04 - 201ms/epoch - 5ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0545 - val_loss: 0.0099 - val_mse: 0.0099 - val_mae: 0.0844 - lr: 1.0000e-04 - 219ms/epoch - 5ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0499 - val_loss: 0.0104 - val_mse: 0.0104 - val_mae: 0.0864 - lr: 1.0000e-04 - 196ms/epoch - 5ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0514 - val_loss: 0.0106 - val_mse: 0.0106 - val_mae: 0.0873 - lr: 1.0000e-04 - 220ms/epoch - 5ms/step
Epoch 27/500

Epoch 00027: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00027: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0498 - val_loss: 0.0111 - val_mse: 0.0111 - val_mae: 0.0891 - lr: 1.0000e-04 - 184ms/epoch - 4ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0478 - val_loss: 0.0111 - val_mse: 0.0111 - val_mae: 0.0890 - lr: 1.0000e-05 - 199ms/epoch - 5ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0462 - val_loss: 0.0111 - val_mse: 0.0111 - val_mae: 0.0890 - lr: 1.0000e-05 - 176ms/epoch - 4ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0472 - val_loss: 0.0111 - val_mse: 0.0111 - val_mae: 0.0892 - lr: 1.0000e-05 - 197ms/epoch - 5ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0472 - val_loss: 0.0111 - val_mse: 0.0111 - val_mae: 0.0892 - lr: 1.0000e-05 - 221ms/epoch - 5ms/step
Epoch 32/500

Epoch 00032: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00032: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0486 - val_loss: 0.0111 - val_mse: 0.0111 - val_mae: 0.0891 - lr: 1.0000e-05 - 221ms/epoch - 5ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0034 - mse: 0.0034 - mae: 0.0450 - val_loss: 0.0111 - val_mse: 0.0111 - val_mae: 0.0891 - lr: 1.0000e-05 - 175ms/epoch - 4ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0472 - val_loss: 0.0111 - val_mse: 0.0111 - val_mae: 0.0890 - lr: 1.0000e-05 - 219ms/epoch - 5ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0462 - val_loss: 0.0111 - val_mse: 0.0111 - val_mae: 0.0891 - lr: 1.0000e-05 - 184ms/epoch - 4ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0466 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0894 - lr: 1.0000e-05 - 204ms/epoch - 5ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0481 - val_loss: 0.0111 - val_mse: 0.0111 - val_mae: 0.0893 - lr: 1.0000e-05 - 232ms/epoch - 5ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0469 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0894 - lr: 1.0000e-05 - 199ms/epoch - 5ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0486 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0896 - lr: 1.0000e-05 - 218ms/epoch - 5ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0473 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0896 - lr: 1.0000e-05 - 185ms/epoch - 4ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0464 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0896 - lr: 1.0000e-05 - 227ms/epoch - 5ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0470 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0896 - lr: 1.0000e-05 - 186ms/epoch - 4ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0475 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0895 - lr: 1.0000e-05 - 222ms/epoch - 5ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0473 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0895 - lr: 1.0000e-05 - 202ms/epoch - 5ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0034 - mse: 0.0034 - mae: 0.0456 - val_loss: 0.0111 - val_mse: 0.0111 - val_mae: 0.0893 - lr: 1.0000e-05 - 220ms/epoch - 5ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0477 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0895 - lr: 1.0000e-05 - 209ms/epoch - 5ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0478 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0896 - lr: 1.0000e-05 - 226ms/epoch - 5ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0470 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0895 - lr: 1.0000e-05 - 262ms/epoch - 6ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0482 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0895 - lr: 1.0000e-05 - 197ms/epoch - 5ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0475 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0894 - lr: 1.0000e-05 - 245ms/epoch - 6ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0467 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0895 - lr: 1.0000e-05 - 219ms/epoch - 5ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0477 - val_loss: 0.0111 - val_mse: 0.0111 - val_mae: 0.0893 - lr: 1.0000e-05 - 212ms/epoch - 5ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0474 - val_loss: 0.0111 - val_mse: 0.0111 - val_mae: 0.0892 - lr: 1.0000e-05 - 185ms/epoch - 4ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0034 - mse: 0.0034 - mae: 0.0451 - val_loss: 0.0111 - val_mse: 0.0111 - val_mae: 0.0893 - lr: 1.0000e-05 - 196ms/epoch - 5ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0483 - val_loss: 0.0111 - val_mse: 0.0111 - val_mae: 0.0893 - lr: 1.0000e-05 - 196ms/epoch - 5ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0474 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0894 - lr: 1.0000e-05 - 222ms/epoch - 5ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0473 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0894 - lr: 1.0000e-05 - 203ms/epoch - 5ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0465 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0895 - lr: 1.0000e-05 - 219ms/epoch - 5ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0482 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0898 - lr: 1.0000e-05 - 229ms/epoch - 5ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0467 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0896 - lr: 1.0000e-05 - 214ms/epoch - 5ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0033 - mse: 0.0033 - mae: 0.0444 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0896 - lr: 1.0000e-05 - 219ms/epoch - 5ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0465 - val_loss: 0.0111 - val_mse: 0.0111 - val_mae: 0.0893 - lr: 1.0000e-05 - 219ms/epoch - 5ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0463 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0895 - lr: 1.0000e-05 - 235ms/epoch - 5ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0034 - mse: 0.0034 - mae: 0.0448 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0898 - lr: 1.0000e-05 - 191ms/epoch - 4ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0033 - mse: 0.0033 - mae: 0.0448 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0900 - lr: 1.0000e-05 - 196ms/epoch - 5ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0034 - mse: 0.0034 - mae: 0.0448 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0900 - lr: 1.0000e-05 - 220ms/epoch - 5ms/step
Epoch 67/500

Epoch 00067: val_loss did not improve from 0.00834
43/43 - 0s - loss: 0.0033 - mse: 0.0033 - mae: 0.0442 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0901 - lr: 1.0000e-05 - 184ms/epoch - 4ms/step
Epoch 00067: early stopping
SMA
Prediction vs Close:		50.0% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 133.6684179910327 
RMSE:	 11.561505870388714 
MAPE:	 10.289389775089397

EMA
Prediction vs Close:		50.0% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 34.52931938659473 
RMSE:	 5.876165364129462 
MAPE:	 4.852473639818076

WMA
Prediction vs Close:		51.49% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 36.74468487644727 
RMSE:	 6.061739426637149 
MAPE:	 4.85767480758183

DEMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 87.45802496937279 
RMSE:	 9.351899538028238 
MAPE:	 8.239361009856534

KAMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 55.655998887145394 
RMSE:	 7.460294825752223 
MAPE:	 6.325008398714769

MIDPOINT
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 28.58496814832563 
RMSE:	 5.346491199686541 
MAPE:	 4.412567377496203

T3
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 127.72960460138046 
RMSE:	 11.301752280128088 
MAPE:	 9.16046603172084
TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9

Working on TEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16762.799, Time=4.57 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14158.507, Time=2.71 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16445.598, Time=8.48 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-16144.282, Time=10.93 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15275.101, Time=9.23 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15897.090, Time=12.93 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16446.973, Time=9.53 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-16567.628, Time=3.34 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-16523.926, Time=3.65 sec
 ARIMA(1,3,1)(0,0,0)[0] intercept   : AIC=-16696.008, Time=3.22 sec

Best model:  ARIMA(1,3,1)(0,0,0)[0]          
Total fit time: 68.599 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(1, 3, 1)   Log Likelihood                8408.400
Date:                Sun, 12 Dec 2021   AIC                         -16762.799
Time:                        17:50:11   BIC                         -16636.147
Sample:                             0   HQIC                        -16714.159
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -5.289e-07      0.001     -0.000      1.000      -0.002       0.002
x2         -5.288e-07      0.001     -0.001      0.999      -0.002       0.002
x3         -5.306e-07      0.001     -0.000      1.000      -0.002       0.002
x4             1.0000      0.000   2045.695      0.000       0.999       1.001
x5         -5.041e-07      0.000     -0.001      0.999      -0.001       0.001
x6         -9.879e-07   4.33e-05     -0.023      0.982   -8.58e-05    8.38e-05
x7         -5.185e-07      0.001     -0.001      0.999      -0.001       0.001
x8             0.0001      0.000      0.643      0.520      -0.000       0.001
x9          9.794e-08      0.001      0.000      1.000      -0.001       0.001
x10            0.0001      0.000      0.313      0.754      -0.001       0.001
x11           -0.0004      0.000     -2.284      0.022      -0.001   -6.06e-05
x12            0.0005      0.000      2.453      0.014       0.000       0.001
x13        -5.277e-07      0.000     -0.002      0.999      -0.001       0.001
x14        -1.566e-06      0.000     -0.005      0.996      -0.001       0.001
x15        -5.136e-07   9.86e-05     -0.005      0.996      -0.000       0.000
x16         -7.66e-07      0.000     -0.002      0.999      -0.001       0.001
x17        -5.146e-07      0.000     -0.003      0.998      -0.000       0.000
x18        -1.701e-07      0.001     -0.000      1.000      -0.001       0.001
x19         -5.77e-07   8.54e-05     -0.007      0.995      -0.000       0.000
x20         5.026e-07      0.001      0.001      0.999      -0.001       0.001
x21        -2.058e-06      0.000     -0.010      0.992      -0.000       0.000
x22        -1.098e-06      0.001     -0.001      0.999      -0.003       0.003
x23        -1.472e-06      0.001     -0.003      0.998      -0.001       0.001
x24        -8.255e-07      0.001     -0.001      0.999      -0.002       0.002
ar.L1         -0.2866   3.63e-05  -7897.273      0.000      -0.287      -0.287
ma.L1         -0.9124   1.46e-06  -6.25e+05      0.000      -0.912      -0.912
sigma2       9.98e-11   7.23e-11      1.380      0.168    -4.2e-11    2.42e-10
===================================================================================
Ljung-Box (L1) (Q):                  83.51   Jarque-Bera (JB):           4742889.91
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            -5.71
Prob(H) (two-sided):                  0.00   Kurtosis:                       378.86
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 4.2e+22. Standard errors may be unstable.
ARIMA order: (1, 3, 1) 
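One likely reason the SARIMAX summary above reports extreme residual kurtosis (378.86) and a near-singular covariance matrix is the fixed d=3 passed to the stepwise search: repeatedly differencing an already-stationary component amplifies noise. A minimal sketch (illustrative only, not part of the notebook's pipeline) showing that the d-th difference of white noise inflates the variance by roughly C(2d, d), i.e. about 20x for d=3:

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(size=100_000)  # white noise, variance ~1

# Coefficients of (1 - B)^3 are 1, -3, 3, -1; their squared sum is 20,
# so triple-differencing white noise multiplies its variance by ~20.
v0 = noise.var()
v3 = np.diff(noise, n=3).var()
ratio = v3 / v0  # close to 20 for pure white noise
```

This is one reason to let `pmdarima` estimate d (e.g. via unit-root tests) rather than pinning it at 3.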

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.07750, saving model to LSTM7.h5
90/90 - 3s - loss: 0.1902 - mse: 0.1902 - mae: 0.2954 - val_loss: 0.0775 - val_mse: 0.0775 - val_mae: 0.2486 - lr: 0.0010 - 3s/epoch - 34ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.07750
90/90 - 0s - loss: 0.0241 - mse: 0.0241 - mae: 0.1217 - val_loss: 0.1034 - val_mse: 0.1034 - val_mae: 0.2967 - lr: 0.0010 - 382ms/epoch - 4ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.07750 to 0.06287, saving model to LSTM7.h5
90/90 - 0s - loss: 0.0182 - mse: 0.0182 - mae: 0.1045 - val_loss: 0.0629 - val_mse: 0.0629 - val_mae: 0.2273 - lr: 0.0010 - 372ms/epoch - 4ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.06287 to 0.06190, saving model to LSTM7.h5
90/90 - 0s - loss: 0.0140 - mse: 0.0140 - mae: 0.0924 - val_loss: 0.0619 - val_mse: 0.0619 - val_mae: 0.2270 - lr: 0.0010 - 383ms/epoch - 4ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0103 - mse: 0.0103 - mae: 0.0798 - val_loss: 0.0712 - val_mse: 0.0712 - val_mae: 0.2454 - lr: 0.0010 - 353ms/epoch - 4ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0084 - mse: 0.0084 - mae: 0.0726 - val_loss: 0.0844 - val_mse: 0.0844 - val_mae: 0.2703 - lr: 0.0010 - 403ms/epoch - 4ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0079 - mse: 0.0079 - mae: 0.0701 - val_loss: 0.0739 - val_mse: 0.0739 - val_mae: 0.2512 - lr: 0.0010 - 374ms/epoch - 4ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0079 - mse: 0.0079 - mae: 0.0703 - val_loss: 0.1104 - val_mse: 0.1104 - val_mae: 0.3121 - lr: 0.0010 - 440ms/epoch - 5ms/step
Epoch 9/500

Epoch 00009: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00009: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0080 - mse: 0.0080 - mae: 0.0709 - val_loss: 0.1088 - val_mse: 0.1088 - val_mae: 0.3094 - lr: 0.0010 - 372ms/epoch - 4ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0174 - mse: 0.0174 - mae: 0.1065 - val_loss: 0.0653 - val_mse: 0.0653 - val_mae: 0.2330 - lr: 1.0000e-04 - 353ms/epoch - 4ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0081 - mse: 0.0081 - mae: 0.0716 - val_loss: 0.0680 - val_mse: 0.0680 - val_mae: 0.2385 - lr: 1.0000e-04 - 384ms/epoch - 4ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0652 - val_loss: 0.0700 - val_mse: 0.0700 - val_mae: 0.2423 - lr: 1.0000e-04 - 399ms/epoch - 4ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0652 - val_loss: 0.0715 - val_mse: 0.0715 - val_mae: 0.2450 - lr: 1.0000e-04 - 404ms/epoch - 4ms/step
Epoch 14/500

Epoch 00014: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00014: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0617 - val_loss: 0.0725 - val_mse: 0.0725 - val_mae: 0.2467 - lr: 1.0000e-04 - 382ms/epoch - 4ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0583 - val_loss: 0.0730 - val_mse: 0.0730 - val_mae: 0.2478 - lr: 1.0000e-05 - 349ms/epoch - 4ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0591 - val_loss: 0.0734 - val_mse: 0.0734 - val_mae: 0.2485 - lr: 1.0000e-05 - 370ms/epoch - 4ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0563 - val_loss: 0.0739 - val_mse: 0.0739 - val_mae: 0.2494 - lr: 1.0000e-05 - 356ms/epoch - 4ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0551 - val_loss: 0.0744 - val_mse: 0.0744 - val_mae: 0.2503 - lr: 1.0000e-05 - 361ms/epoch - 4ms/step
Epoch 19/500

Epoch 00019: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00019: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0576 - val_loss: 0.0743 - val_mse: 0.0743 - val_mae: 0.2503 - lr: 1.0000e-05 - 479ms/epoch - 5ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0566 - val_loss: 0.0750 - val_mse: 0.0750 - val_mae: 0.2515 - lr: 1.0000e-05 - 374ms/epoch - 4ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0578 - val_loss: 0.0753 - val_mse: 0.0753 - val_mae: 0.2522 - lr: 1.0000e-05 - 387ms/epoch - 4ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0561 - val_loss: 0.0756 - val_mse: 0.0756 - val_mae: 0.2527 - lr: 1.0000e-05 - 400ms/epoch - 4ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0569 - val_loss: 0.0757 - val_mse: 0.0757 - val_mae: 0.2528 - lr: 1.0000e-05 - 379ms/epoch - 4ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0551 - val_loss: 0.0758 - val_mse: 0.0758 - val_mae: 0.2530 - lr: 1.0000e-05 - 396ms/epoch - 4ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0559 - val_loss: 0.0756 - val_mse: 0.0756 - val_mae: 0.2527 - lr: 1.0000e-05 - 394ms/epoch - 4ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0552 - val_loss: 0.0758 - val_mse: 0.0758 - val_mae: 0.2530 - lr: 1.0000e-05 - 484ms/epoch - 5ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0566 - val_loss: 0.0759 - val_mse: 0.0759 - val_mae: 0.2532 - lr: 1.0000e-05 - 381ms/epoch - 4ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0524 - val_loss: 0.0765 - val_mse: 0.0765 - val_mae: 0.2543 - lr: 1.0000e-05 - 353ms/epoch - 4ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0570 - val_loss: 0.0767 - val_mse: 0.0767 - val_mae: 0.2547 - lr: 1.0000e-05 - 370ms/epoch - 4ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0574 - val_loss: 0.0764 - val_mse: 0.0764 - val_mae: 0.2541 - lr: 1.0000e-05 - 364ms/epoch - 4ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0556 - val_loss: 0.0771 - val_mse: 0.0771 - val_mae: 0.2553 - lr: 1.0000e-05 - 347ms/epoch - 4ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0549 - val_loss: 0.0778 - val_mse: 0.0778 - val_mae: 0.2566 - lr: 1.0000e-05 - 358ms/epoch - 4ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0562 - val_loss: 0.0775 - val_mse: 0.0775 - val_mae: 0.2561 - lr: 1.0000e-05 - 365ms/epoch - 4ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0520 - val_loss: 0.0776 - val_mse: 0.0776 - val_mae: 0.2563 - lr: 1.0000e-05 - 338ms/epoch - 4ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0545 - val_loss: 0.0781 - val_mse: 0.0781 - val_mae: 0.2571 - lr: 1.0000e-05 - 364ms/epoch - 4ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0529 - val_loss: 0.0783 - val_mse: 0.0783 - val_mae: 0.2575 - lr: 1.0000e-05 - 373ms/epoch - 4ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0538 - val_loss: 0.0783 - val_mse: 0.0783 - val_mae: 0.2575 - lr: 1.0000e-05 - 362ms/epoch - 4ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0527 - val_loss: 0.0789 - val_mse: 0.0789 - val_mae: 0.2586 - lr: 1.0000e-05 - 352ms/epoch - 4ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0522 - val_loss: 0.0798 - val_mse: 0.0798 - val_mae: 0.2603 - lr: 1.0000e-05 - 345ms/epoch - 4ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0494 - val_loss: 0.0796 - val_mse: 0.0796 - val_mae: 0.2598 - lr: 1.0000e-05 - 387ms/epoch - 4ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0512 - val_loss: 0.0799 - val_mse: 0.0799 - val_mae: 0.2604 - lr: 1.0000e-05 - 394ms/epoch - 4ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0515 - val_loss: 0.0794 - val_mse: 0.0794 - val_mae: 0.2595 - lr: 1.0000e-05 - 348ms/epoch - 4ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0535 - val_loss: 0.0796 - val_mse: 0.0796 - val_mae: 0.2598 - lr: 1.0000e-05 - 355ms/epoch - 4ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0520 - val_loss: 0.0800 - val_mse: 0.0800 - val_mae: 0.2605 - lr: 1.0000e-05 - 350ms/epoch - 4ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0534 - val_loss: 0.0805 - val_mse: 0.0805 - val_mae: 0.2615 - lr: 1.0000e-05 - 357ms/epoch - 4ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0543 - val_loss: 0.0792 - val_mse: 0.0792 - val_mae: 0.2590 - lr: 1.0000e-05 - 363ms/epoch - 4ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0515 - val_loss: 0.0785 - val_mse: 0.0785 - val_mae: 0.2578 - lr: 1.0000e-05 - 354ms/epoch - 4ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0522 - val_loss: 0.0788 - val_mse: 0.0788 - val_mae: 0.2583 - lr: 1.0000e-05 - 371ms/epoch - 4ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0506 - val_loss: 0.0802 - val_mse: 0.0802 - val_mae: 0.2607 - lr: 1.0000e-05 - 351ms/epoch - 4ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0512 - val_loss: 0.0797 - val_mse: 0.0797 - val_mae: 0.2598 - lr: 1.0000e-05 - 355ms/epoch - 4ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0484 - val_loss: 0.0796 - val_mse: 0.0796 - val_mae: 0.2597 - lr: 1.0000e-05 - 348ms/epoch - 4ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0525 - val_loss: 0.0804 - val_mse: 0.0804 - val_mae: 0.2611 - lr: 1.0000e-05 - 364ms/epoch - 4ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0523 - val_loss: 0.0783 - val_mse: 0.0783 - val_mae: 0.2573 - lr: 1.0000e-05 - 434ms/epoch - 5ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.06190
90/90 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0530 - val_loss: 0.0777 - val_mse: 0.0777 - val_mae: 0.2562 - lr: 1.0000e-05 - 389ms/epoch - 4ms/step
Epoch 00054: early stopping
SMA
Prediction vs Close:		50.0% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 133.6684179910327 
RMSE:	 11.561505870388714 
MAPE:	 10.289389775089397

EMA
Prediction vs Close:		50.0% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 34.52931938659473 
RMSE:	 5.876165364129462 
MAPE:	 4.852473639818076

WMA
Prediction vs Close:		51.49% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 36.74468487644727 
RMSE:	 6.061739426637149 
MAPE:	 4.85767480758183

DEMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 87.45802496937279 
RMSE:	 9.351899538028238 
MAPE:	 8.239361009856534

KAMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 55.655998887145394 
RMSE:	 7.460294825752223 
MAPE:	 6.325008398714769

MIDPOINT
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 28.58496814832563 
RMSE:	 5.346491199686541 
MAPE:	 4.412567377496203

T3
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 127.72960460138046 
RMSE:	 11.301752280128088 
MAPE:	 9.16046603172084

TEMA
Prediction vs Close:		49.63% Accuracy
Prediction vs Prediction:	49.25% Accuracy
MSE:	 34.19976389234231 
RMSE:	 5.8480564200717415 
MAPE:	 5.017065986818192
Runtime: mins: 53.440131934050015

Architecture Used

In [134]:
from google.colab import files
import cv2
uploaded = files.upload()
Saving Experiment7.png to Experiment7 (1).png
In [135]:
img = cv2.imread('Experiment7.png')
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture '+imgfile,fontsize=18)
plt.imshow(img)
Out[135]:
<matplotlib.image.AxesImage at 0x7f75dc198a90>

Model Plots

In [110]:
with open('simulation7_data.json') as json_file:
    simulation7 = json.load(json_file)
imgfile = 'Experiment7'
In [111]:
for i in range(len(simulation7)):
  SIM = list(simulation7.keys())[i]
  plot_train(simulation7,SIM)
  plot_test(simulation7,SIM)
----- Train RMSE for TEMA ----- 7.535421816948624
----- Train_MSE_LSTM for TEMA ----- 56.7825819593453
----- Train MAE LSTM for TEMA ----- 5.2132834616768555
----- Test RMSE for TEMA----- 5.8480564200717415
----- Test_MSE_LSTM for TEMA----- 34.19976389234231
----- Test_MAE_LSTM for TEMA----- 5.017065986818192
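The train/test figures above are standard regression error metrics. As a self-contained sketch (using a hypothetical helper, not the notebook's own `sklearn`-based calls), they can be computed directly with NumPy:

```python
import numpy as np

def regression_metrics(actual, predicted):
    """Compute MSE, RMSE, MAE and MAPE for aligned 1-D series."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mse = np.mean((actual - predicted) ** 2)   # mean squared error
    rmse = np.sqrt(mse)                        # same units as the price
    mae = np.mean(np.abs(actual - predicted))  # mean absolute error
    mape = np.mean(np.abs((actual - predicted) / actual)) * 100  # percent
    return mse, rmse, mae, mape

mse, rmse, mae, mape = regression_metrics([100, 102, 105], [101, 101, 104])
```

RMSE is just the square root of MSE, which is why the pairs printed above always satisfy that relation.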

ARIMA with Exogenous Variable Multistep Multivariate LSTM Hybrid Model Experiment 8

In [137]:
def get_arima_exog(dataframe, original_data, train_len, test_len):

    # prepare train and test data for the exogenous variables
    X_value = pd.DataFrame(dataframe.iloc[:, :])
    y_value = pd.DataFrame(dataframe.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Get data and check shape
    # X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)#X will be of shape 224 X 3 X 21 (each 3 X 21 array will be 3 days' worth of data). yc will have the corresponding closing price value
    # pdb.set_trace()
    X_train, X_test, = split_train_test(X_scale_dataset)
    y_train, y_test, = split_train_test(y_scale_dataset)
    yc_train, yc_test = split_train_test(original_data)
    yc = yc_test.values.tolist()
    y_train_list = y_train.flatten().tolist()
    y_test_list = y_test.flatten().tolist()
    # yc_train, yc_test, = split_train_test(original_data)
    index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)

    # Initialize model
    model = auto_arima(y_train_list,exogenous  = X_train,trace=True, error_action='ignore', start_p=1,start_q=1,max_p=3,max_q=3,d=3,
            suppress_warnings=True,stepwise=True,seasonal=True)

    # Determine model parameters
    print(model.summary())
    model.fit(y_train_list,maxiter=200)
    order = model.get_params()['order']
    print('ARIMA order:', order, '\n')

    # Generate predictions
    prediction = []
    for i in range(len(y_test_list)):
        model = pmdarima.ARIMA(order=order)
        model.fit(y_train_list)
        # print('working on', i+1, 'of', len(y_test), '-- ' + str(int(100 * (i + 1) / len(y_test))) + '% complete')

        prediction.append(model.predict()[0])
        y_train_list.append(y_test_list[i])

    predictionte = y_scaler.inverse_transform(np.array(prediction).reshape(-1,1))
    y_test_ = y_scaler.inverse_transform(np.array(y_test_list).reshape(-1,1))

    # Generate error data
    mse = mean_squared_error(yc_test, predictionte)
    rmse = mse ** 0.5
    mae = mean_absolute_error(y_test_ , predictionte )
    return yc,predictionte.flatten().tolist(), mse, rmse, mae
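The prediction loop in `get_arima_exog` follows a walk-forward pattern: forecast one step ahead, append the observed value to the training history, then refit and repeat. A minimal sketch of that pattern, using a stand-in "last value" forecaster instead of `pmdarima.ARIMA` (hypothetical example, purely to illustrate the loop structure):

```python
def walk_forward(train, test, forecast=lambda history: history[-1]):
    """One-step-ahead walk-forward forecasting over a test set."""
    history = list(train)
    predictions = []
    for observed in test:
        predictions.append(forecast(history))  # forecast next step from history
        history.append(observed)               # expand the window with the truth
    return predictions

preds = walk_forward([1, 2, 3], [4, 5, 6])
```

Refitting a full ARIMA at every step (as the function above does) is expensive; `pmdarima` models also expose an `update()` method that can fold in new observations without a full refit.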
In [138]:
def get_lstm(data,original_data, train_len, test_len,img_file,ma ,lstm_len=3):
    # prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Get data and check shape
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)  # X will be of shape 224 X 3 X 21 (each 3 X 21 array is 3 days' worth of data); yc holds the corresponding closing prices
    # pdb.set_trace()
    X_train, X_test, = split_train_test(X)
    y_train, y_test, = split_train_test(y)
    # yc_train, yc_test, = split_train_test(original_data)
    index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det =20
    input_dim = X_train.shape[1]#3
    feature_size = X_train.shape[2]#24
    output_dim = y_train.shape[1]#1



    # Option 1
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    # model.add(Dense(units=64,activation='relu'))
    # model.add(Dropout(0.5))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')

    # ## Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()


    # # # option 2
    # model = Sequential()
    # model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
    # model.add(Dense(64))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate = 0.001), loss='mean_squared_error', metrics=['accuracy'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM7.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 3
    # define custom activation
    # 
    # class Double_Tanh(Activation):
    #     def __init__(self, activation, **kwargs):
    #         super(Double_Tanh, self).__init__(activation, **kwargs)
    #         self.__name__ = 'double_tanh'

    # def double_tanh(x):
    #     return (K.tanh(x) * 2)

    # get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
    #     # Model Generation
    # model = Sequential()
    # #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    # model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
    # model.add(Dense(1))
    # model.add(Activation(double_tanh))
    # model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM7.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # #Option 4
    # # Set up & fit LSTM RNN
    model = Sequential()
    model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(input_dim, feature_size)))
    model.add(LSTM(units=int(lstm_len/2)))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='mean_squared_error', optimizer='adam')
    # Common code
    callbacks = [
    EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    ModelCheckpoint('LSTM8.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file+'.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # plot loss
    fname2 = img_file+'-'+ma
    plt.title(img_file+'-'+ma+' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2+'.png',dpi='figure')
    pyplot.show()



    # Generate predictions
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
    outputtr = []
    for i in range(len(predictiontr)):
        outputtr.extend(predictiontr[i])
    predictiontr = outputtr
    # Generate error data

    ## replace with yc , xtest generated by new multistep method
    mse_tr = mean_squared_error(y_train, predictiontr)
    rmse_tr = mse_tr ** 0.5
    # mape = mean_absolute_percentage_error(X_test, pd.Series(predictiontr))
    mae_tr = mean_absolute_error(y_train, pd.Series(predictiontr))
    # Original_tr = pd.Series(yc_train)
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()


    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte)-det).tolist()
    outputte = []
    for i in range(len(predictionte)):
        outputte.extend(predictionte[i])
    predictionte = outputte
    # Generate error data

    mse_te = mean_squared_error(y_test, predictionte)
    rmse_te = mse_te ** 0.5
    # mape = mean_absolute_percentage_error(X_test, pd.Series(predictionte))
    mae_te = mean_absolute_error(y_test, pd.Series(predictionte))
    # Original_te = pd.Series(yc_test)
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()

    return Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,Original_te,predictionte, mse_te,rmse_te,mae_te
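`get_lstm` relies on a min-max round trip: features and targets are squashed into (-1, 1) before training, and predictions are mapped back to price units with `inverse_transform`. A NumPy-only sketch of that round trip (equivalent in spirit to `MinMaxScaler`, with hypothetical helper names):

```python
import numpy as np

def minmax_scale(x, feature_range=(-1, 1)):
    """Scale a 1-D array into feature_range; also return the parameters
    needed to invert the transform later."""
    lo, hi = feature_range
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    scaled = (x - x_min) / (x_max - x_min) * (hi - lo) + lo
    return scaled, (x_min, x_max, lo, hi)

def minmax_inverse(scaled, params):
    """Undo minmax_scale, recovering the original units."""
    x_min, x_max, lo, hi = params
    return (np.asarray(scaled) - lo) / (hi - lo) * (x_max - x_min) + x_min

scaled, params = minmax_scale([10.0, 20.0, 30.0])
restored = minmax_inverse(scaled, params)
```

Note the scaler must be fit on the training data only and reused for the test data, otherwise test-set information leaks into the scaling parameters.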
In [139]:
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation8 = {}
    imgfile = 'Experiment8'
    for ma in optimized_period:
                print(ma)
                print(functions[ma])
                print ( int( optimized_period[ma]))
              # if ma == 'SMA':
                low_vol = df.apply(lambda c:  functions[ma](c, timeperiod = int( optimized_period[ma])))
                low_vol = low_vol.fillna(0)
                low_vol_data = df['close']
                high_vol = pd.DataFrame()
                df2 = df.copy()
                for i in df2.columns:
                  if i in low_vol.columns:
                    high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
                high_vol_data = df['close']
                ## *****************************************************
                # Generate ARIMA and LSTM predictions
                print('\nWorking on ' + ma + ' predictions')
                try:
                  print('parameters used : ', train_len, test_len)
                  low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse,low_vol_mae = get_arima_exog(low_vol,low_vol_data, train_len, test_len)
                except Exception as e:
                    print('ARIMA error:', e, '- skipping to next MA type')
                    continue
                Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse,high_vol_mae, = get_lstm(high_vol,high_vol_data, train_len, test_len,imgfile,ma)
                final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps 
                mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
                rmse_ftr = mse_ftr ** 0.5
                mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
                mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)

                final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
                mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
                rmse = mse ** 0.5
                mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
                mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
                # Generate prediction accuracy
                actual = df['close'].tail(test_len).values
                result_1 = []
                result_2 = []
                for i in range(1, len(final_prediction)):
                    # Compare prediction to previous close price
                    if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                        result_1.append(1)
                    elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                        result_1.append(1)
                    else:
                        result_1.append(0)

                    # Compare prediction to previous prediction
                    if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                        result_2.append(1)
                    elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                        result_2.append(1)
                    else:
                        result_2.append(0)

                accuracy_1 = np.mean(result_1)
                accuracy_2 = np.mean(result_2)

                simulation8[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
                                              'rmse': low_vol_rmse, 'mae' : low_vol_mae},
                                  'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
                                              'rmse': high_vol_rmse, 'mae' : high_vol_mae},
                                  'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
                                              'rmse': rmse_ftr, 'mae' : mae_ftr},
                                  'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
                                            'rmse': rmse, 'mae': mae },
                                  'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}

                # save simulation data here as checkpoint
                with open('simulation8_data.json', 'w') as fp:
                    json.dump(simulation8, fp)

                for ma in simulation8.keys():
                    print('\n' + ma)
                    print('Prediction vs Close:\t\t' + str(round(100*simulation8[ma]['accuracy']['prediction vs close'], 2))
                          + '% Accuracy')
                    print('Prediction vs Prediction:\t' + str(round(100*simulation8[ma]['accuracy']['prediction vs prediction'], 2))
                          + '% Accuracy')
                    print('MSE:\t', simulation8[ma]['final']['mse'],
                          '\nRMSE:\t', simulation8[ma]['final']['rmse'],
                          '\nMAPE:\t', simulation8[ma]['final']['mae'])#,
                          # '\nMAPE:\t', simulation[ma]['final']['mape'])
              # else:
              #   break
    elapsed = timeit.default_timer() - start_time
    print('Runtime: mins:',elapsed/60)
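The volatility split at the top of this loop (moving average as the low-volatility ARIMA input, residual as the high-volatility LSTM input) can be reproduced in isolation. A sketch with pandas, substituting a plain rolling mean for the TA-Lib functions dict used above:

```python
import pandas as pd

close = pd.Series([100.0, 102.0, 101.0, 105.0, 104.0, 107.0])

# Low-volatility component: simple moving average, leading NaNs filled
# with 0 to mirror low_vol.fillna(0) above
low_vol = close.rolling(window=3).mean().fillna(0)

# High-volatility component: residual of the series minus its moving average
high_vol = close - low_vol

# The two components sum back to the original series exactly
assert ((low_vol + high_vol) - close).abs().max() < 1e-9
```

Note that filling the leading NaNs with 0 makes the first window - 1 residuals equal to the full close price, which may be one reason the first few hybrid steps are dropped downstream.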
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17

Working on SMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16999.786, Time=3.28 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14568.592, Time=4.67 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15578.394, Time=8.51 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14566.592, Time=7.38 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16966.361, Time=9.48 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-16121.635, Time=9.85 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-17214.069, Time=12.68 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-14562.592, Time=9.21 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-14572.319, Time=9.59 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-14403.474, Time=41.33 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 115.999 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8634.035
Date:                Sun, 12 Dec 2021   AIC                         -17214.069
Time:                        18:02:30   BIC                         -17087.416
Sample:                             0   HQIC                        -17165.429
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -4.257e-09   9.55e-06     -0.000      1.000   -1.87e-05    1.87e-05
x2         -4.256e-09   9.56e-06     -0.000      1.000   -1.87e-05    1.87e-05
x3         -4.313e-09   9.62e-06     -0.000      1.000   -1.89e-05    1.88e-05
x4             1.0000   9.61e-06   1.04e+05      0.000       1.000       1.000
x5         -3.891e-09   9.14e-06     -0.000      1.000   -1.79e-05    1.79e-05
x6         -1.122e-08   1.03e-05     -0.001      0.999   -2.03e-05    2.03e-05
x7         -4.223e-09   9.54e-06     -0.000      1.000   -1.87e-05    1.87e-05
x8         -4.234e-09   9.55e-06     -0.000      1.000   -1.87e-05    1.87e-05
x9         -1.626e-10   6.54e-07     -0.000      1.000   -1.28e-06    1.28e-06
x10        -6.831e-10   2.91e-06     -0.000      1.000    -5.7e-06     5.7e-06
x11        -4.115e-09   9.41e-06     -0.000      1.000   -1.84e-05    1.84e-05
x12        -4.303e-09   9.62e-06     -0.000      1.000   -1.89e-05    1.88e-05
x13        -4.288e-09    9.6e-06     -0.000      1.000   -1.88e-05    1.88e-05
x14        -3.749e-08   2.81e-05     -0.001      0.999   -5.51e-05     5.5e-05
x15        -5.032e-09   1.04e-05     -0.000      1.000   -2.04e-05    2.03e-05
x16        -3.685e-09      9e-06     -0.000      1.000   -1.76e-05    1.76e-05
x17        -3.286e-09   8.45e-06     -0.000      1.000   -1.66e-05    1.66e-05
x18         -1.22e-08   1.59e-05     -0.001      0.999   -3.11e-05    3.11e-05
x19        -5.685e-09    1.1e-05     -0.001      1.000   -2.16e-05    2.16e-05
x20         -1.42e-08   1.69e-05     -0.001      0.999   -3.32e-05    3.32e-05
x21        -5.194e-08   3.31e-05     -0.002      0.999   -6.49e-05    6.48e-05
x22        -2.548e-08   2.31e-05     -0.001      0.999   -4.53e-05    4.52e-05
x23        -3.534e-08   2.73e-05     -0.001      0.999   -5.35e-05    5.34e-05
x24        -1.566e-08    1.8e-05     -0.001      0.999   -3.53e-05    3.53e-05
ma.L1         -1.3899   4.98e-09  -2.79e+08      0.000      -1.390      -1.390
ma.L2          0.4032   4.98e-09   8.09e+07      0.000       0.403       0.403
sigma2      7.635e-11   6.92e-11      1.103      0.270   -5.93e-11    2.12e-10
===================================================================================
Ljung-Box (L1) (Q):                  68.48   Jarque-Bera (JB):           5579791.06
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            10.12
Prob(H) (two-sided):                  0.00   Kurtosis:                       410.36
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.69e+24. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04823, saving model to LSTM8.h5
48/48 - 4s - loss: 1.3964 - val_loss: 0.0482 - lr: 0.0010 - 4s/epoch - 77ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.04823
48/48 - 0s - loss: 1.3224 - val_loss: 0.0519 - lr: 0.0010 - 287ms/epoch - 6ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.04823
48/48 - 0s - loss: 1.2289 - val_loss: 0.0582 - lr: 0.0010 - 277ms/epoch - 6ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.04823
48/48 - 0s - loss: 1.1176 - val_loss: 0.0641 - lr: 0.0010 - 257ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.04823
48/48 - 0s - loss: 1.0133 - val_loss: 0.0698 - lr: 0.0010 - 239ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.04823
48/48 - 0s - loss: 0.9452 - val_loss: 0.0754 - lr: 0.0010 - 273ms/epoch - 6ms/step
[Epochs 7-50 elided: val_loss never improved on 0.04823; ReduceLROnPlateau cut the learning rate to 1e-05 at epoch 11, and training loss drifted from 0.9122 down to 0.8754 while val_loss crept from 0.0760 to 0.0815.]
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.04823
48/48 - 0s - loss: 0.8750 - val_loss: 0.0816 - lr: 1.0000e-05 - 239ms/epoch - 5ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 23.82392201920814 
RMSE:	 4.880975519218278 
MAE:	 3.848681389687446
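The two accuracy figures above come from the sign-agreement loop in the cell: a step counts as correct when the forecast and the actual close move in the same strict direction (ties count as misses). A compact restatement of that logic with toy arrays, not the notebook's data:

```python
import numpy as np

actual = np.array([100.0, 101.0, 100.5, 102.0])
prediction = np.array([102.0, 100.8, 101.2, 101.5])

hits_vs_close, hits_vs_pred = [], []
for i in range(1, len(prediction)):
    # Prediction vs previous close: forecast level relative to actual[i-1]
    hits_vs_close.append(int((prediction[i] - actual[i - 1]) * (actual[i] - actual[i - 1]) > 0))
    # Prediction vs previous prediction: the forecast's own step direction
    hits_vs_pred.append(int((prediction[i] - prediction[i - 1]) * (actual[i] - actual[i - 1]) > 0))

accuracy_1 = np.mean(hits_vs_close)  # 2/3: called the i=1 and i=3 moves
accuracy_2 = np.mean(hits_vs_pred)   # 1/3: only the i=3 move
```

The product-of-differences test is equivalent to the if/elif pairs in the loop: it is positive exactly when both moves are strictly up or both strictly down.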
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51

Working on EMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16999.778, Time=3.08 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14568.589, Time=4.43 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-14606.447, Time=6.05 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14566.589, Time=6.66 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15343.613, Time=9.81 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15047.583, Time=13.53 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16858.964, Time=11.94 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17024.022, Time=6.03 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-16998.618, Time=3.25 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17081.451, Time=6.60 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=inf, Time=16.83 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-16997.990, Time=3.51 sec
 ARIMA(3,3,1)(0,0,0)[0] intercept   : AIC=-16992.667, Time=4.34 sec

Best model:  ARIMA(3,3,1)(0,0,0)[0]          
Total fit time: 96.060 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 1)   Log Likelihood                8569.726
Date:                Sun, 12 Dec 2021   AIC                         -17081.451
Time:                        18:08:04   BIC                         -16945.417
Sample:                             0   HQIC                        -17029.208
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -2.316e-10   9.89e-05  -2.34e-06      1.000      -0.000       0.000
x2         -2.309e-10   9.88e-05  -2.34e-06      1.000      -0.000       0.000
x3         -2.325e-10   9.91e-05  -2.35e-06      1.000      -0.000       0.000
x4             1.0000    9.9e-05   1.01e+04      0.000       1.000       1.000
x5         -2.108e-10   9.43e-05  -2.24e-06      1.000      -0.000       0.000
x6         -7.997e-10      0.000  -4.63e-06      1.000      -0.000       0.000
x7         -2.295e-10   9.85e-05  -2.33e-06      1.000      -0.000       0.000
x8         -2.244e-10   9.74e-05   -2.3e-06      1.000      -0.000       0.000
x9         -1.166e-11   1.98e-05   -5.9e-07      1.000   -3.87e-05    3.87e-05
x10        -4.454e-11   4.19e-05  -1.06e-06      1.000   -8.22e-05    8.22e-05
x11        -2.219e-10   9.68e-05  -2.29e-06      1.000      -0.000       0.000
x12        -2.264e-10    9.8e-05  -2.31e-06      1.000      -0.000       0.000
x13        -2.315e-10   9.89e-05  -2.34e-06      1.000      -0.000       0.000
x14        -1.767e-09      0.000  -6.47e-06      1.000      -0.001       0.001
x15        -2.096e-10   9.38e-05  -2.23e-06      1.000      -0.000       0.000
x16        -5.257e-10      0.000   -3.5e-06      1.000      -0.000       0.000
x17        -2.143e-10   9.53e-05  -2.25e-06      1.000      -0.000       0.000
x18        -3.776e-11   3.61e-05  -1.05e-06      1.000   -7.08e-05    7.08e-05
x19         -2.52e-10      0.000  -2.41e-06      1.000      -0.000       0.000
x20        -2.417e-10   9.51e-05  -2.54e-06      1.000      -0.000       0.000
x21         -3.16e-09      0.000  -8.64e-06      1.000      -0.001       0.001
x22        -2.955e-09      0.000  -8.32e-06      1.000      -0.001       0.001
x23        -1.664e-09      0.000  -6.29e-06      1.000      -0.001       0.001
x24        -1.568e-09      0.000  -6.07e-06      1.000      -0.001       0.001
ar.L1         -0.4923    1.2e-09  -4.09e+08      0.000      -0.492      -0.492
ar.L2         -0.1923      7e-10  -2.75e+08      0.000      -0.192      -0.192
ar.L3         -0.0461   3.32e-10  -1.39e+08      0.000      -0.046      -0.046
ma.L1         -0.7077   2.73e-09  -2.59e+08      0.000      -0.708      -0.708
sigma2       8.99e-11   6.96e-11      1.291      0.197   -4.66e-11    2.26e-10
===================================================================================
Ljung-Box (L1) (Q):                  55.51   Jarque-Bera (JB):           4268313.90
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.44
Prob(H) (two-sided):                  0.00   Kurtosis:                       359.56
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.36e+28. Standard errors may be unstable.
ARIMA order: (3, 3, 1) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04775, saving model to LSTM8.h5
16/16 - 4s - loss: 1.4044 - val_loss: 0.0478 - lr: 0.0010 - 4s/epoch - 221ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.04775
16/16 - 0s - loss: 1.3789 - val_loss: 0.0488 - lr: 0.0010 - 109ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.04775
16/16 - 0s - loss: 1.3506 - val_loss: 0.0499 - lr: 0.0010 - 95ms/epoch - 6ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.04775
16/16 - 0s - loss: 1.3191 - val_loss: 0.0511 - lr: 0.0010 - 86ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.04775
16/16 - 0s - loss: 1.2858 - val_loss: 0.0524 - lr: 0.0010 - 90ms/epoch - 6ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.04775
16/16 - 0s - loss: 1.2524 - val_loss: 0.0538 - lr: 0.0010 - 100ms/epoch - 6ms/step
[Epochs 7-50 elided: val_loss never improved on 0.04775; ReduceLROnPlateau cut the learning rate to 1e-05 at epoch 11, and training loss drifted from 1.2313 down to 1.2064 while val_loss crept from 0.0540 to 0.0552.]
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.04775
16/16 - 0s - loss: 1.2061 - val_loss: 0.0552 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 23.82392201920814 
RMSE:	 4.880975519218278 
MAE:	 3.848681389687446

EMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 36.51946703896653 
RMSE:	 6.043133875644865 
MAE:	 4.745071953502077
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49

Working on WMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16999.780, Time=2.87 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14568.589, Time=4.52 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16789.784, Time=11.92 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14566.589, Time=7.18 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16919.987, Time=9.64 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-14616.097, Time=11.83 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-17225.955, Time=17.16 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-14562.589, Time=9.13 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-15582.364, Time=17.99 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-12043.670, Time=36.81 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 129.063 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8639.977
Date:                Sun, 12 Dec 2021   AIC                         -17225.955
Time:                        18:19:12   BIC                         -17099.302
Sample:                             0   HQIC                        -17177.315
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -4.802e-09   4.51e-06     -0.001      0.999   -8.84e-06    8.83e-06
x2         -4.783e-09    4.5e-06     -0.001      0.999   -8.83e-06    8.82e-06
x3         -4.811e-09   4.51e-06     -0.001      0.999   -8.85e-06    8.84e-06
x4             1.0000   4.51e-06   2.22e+05      0.000       1.000       1.000
x5         -4.353e-09    4.3e-06     -0.001      0.999   -8.43e-06    8.42e-06
x6         -1.569e-08   7.54e-06     -0.002      0.998   -1.48e-05    1.48e-05
x7          -4.75e-09   4.49e-06     -0.001      0.999    -8.8e-06    8.79e-06
x8         -4.628e-09   4.43e-06     -0.001      0.999   -8.69e-06    8.69e-06
x9         -4.733e-10   1.16e-06     -0.000      1.000   -2.27e-06    2.27e-06
x10         -7.88e-10    1.8e-06     -0.000      1.000   -3.52e-06    3.52e-06
x11        -4.609e-09   4.42e-06     -0.001      0.999   -8.68e-06    8.67e-06
x12        -4.607e-09   4.42e-06     -0.001      0.999   -8.68e-06    8.67e-06
x13        -4.792e-09   4.51e-06     -0.001      0.999   -8.84e-06    8.83e-06
x14        -3.777e-08   1.24e-05     -0.003      0.998   -2.44e-05    2.44e-05
x15         -3.99e-09   4.12e-06     -0.001      0.999   -8.08e-06    8.07e-06
x16        -1.309e-08   7.41e-06     -0.002      0.999   -1.45e-05    1.45e-05
x17        -4.789e-09   4.51e-06     -0.001      0.999   -8.85e-06    8.84e-06
x18        -2.665e-10   9.77e-07     -0.000      1.000   -1.92e-06    1.92e-06
x19        -4.919e-09   4.56e-06     -0.001      0.999   -8.94e-06    8.93e-06
x20            -4e-10   9.58e-07     -0.000      1.000   -1.88e-06    1.88e-06
x21        -6.782e-08   1.67e-05     -0.004      0.997   -3.27e-05    3.26e-05
x22         -6.03e-08   1.58e-05     -0.004      0.997   -3.09e-05    3.08e-05
x23        -3.157e-08   1.14e-05     -0.003      0.998   -2.23e-05    2.23e-05
x24        -3.671e-08   1.23e-05     -0.003      0.998   -2.41e-05    2.41e-05
ma.L1         -1.3901   5.58e-10  -2.49e+09      0.000      -1.390      -1.390
ma.L2          0.4033   5.75e-10   7.02e+08      0.000       0.403       0.403
sigma2      7.525e-11   6.92e-11      1.088      0.277   -6.03e-11    2.11e-10
===================================================================================
Ljung-Box (L1) (Q):                  69.18   Jarque-Bera (JB):           6366427.21
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            12.29
Prob(H) (two-sided):                  0.00   Kurtosis:                       437.97
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.29e+25. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 
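The checkpoint and learning-rate messages in the trace that follows come from Keras' `ModelCheckpoint`, `ReduceLROnPlateau`, and `EarlyStopping` callbacks. The reductions at epochs 10 and 15 and the stop at epoch 55 are consistent with a reduction factor of 0.1, LR patience of 5, and stopping patience of 50 — inferred from the log, since the callback configuration itself is not shown. The bookkeeping can be sketched in plain Python:

```python
def simulate_callbacks(val_losses, lr=1e-3, factor=0.1,
                       lr_patience=5, min_lr=1e-5, stop_patience=50):
    # Mimics ReduceLROnPlateau + EarlyStopping bookkeeping on a val_loss
    # trace: cut lr by `factor` after `lr_patience` epochs without a new
    # best (down to `min_lr`); halt after `stop_patience` epochs without one.
    best, since_best, events = float("inf"), 0, []
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, since_best = loss, 0
        else:
            since_best += 1
        if since_best and since_best % lr_patience == 0:
            new_lr = max(lr * factor, min_lr)
            if new_lr < lr:
                events.append((epoch, "reduce_lr", new_lr))
                lr = new_lr
        if since_best >= stop_patience:
            events.append((epoch, "early_stop", lr))
            break
    return best, events
```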

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05123, saving model to LSTM8.h5
17/17 - 4s - loss: 1.4678 - val_loss: 0.0512 - lr: 0.0010 - 4s/epoch - 218ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.05123 to 0.05053, saving model to LSTM8.h5
17/17 - 0s - loss: 1.4292 - val_loss: 0.0505 - lr: 0.0010 - 113ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.05053 to 0.04967, saving model to LSTM8.h5
17/17 - 0s - loss: 1.3882 - val_loss: 0.0497 - lr: 0.0010 - 122ms/epoch - 7ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.04967 to 0.04896, saving model to LSTM8.h5
17/17 - 0s - loss: 1.3479 - val_loss: 0.0490 - lr: 0.0010 - 121ms/epoch - 7ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.04896 to 0.04876, saving model to LSTM8.h5
17/17 - 0s - loss: 1.3107 - val_loss: 0.0488 - lr: 0.0010 - 151ms/epoch - 9ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.2726 - val_loss: 0.0492 - lr: 0.0010 - 119ms/epoch - 7ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.2304 - val_loss: 0.0503 - lr: 0.0010 - 110ms/epoch - 6ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.1883 - val_loss: 0.0519 - lr: 0.0010 - 119ms/epoch - 7ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.1513 - val_loss: 0.0538 - lr: 0.0010 - 102ms/epoch - 6ms/step
Epoch 10/500

Epoch 00010: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00010: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.1196 - val_loss: 0.0558 - lr: 0.0010 - 106ms/epoch - 6ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.1008 - val_loss: 0.0559 - lr: 1.0000e-04 - 99ms/epoch - 6ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0981 - val_loss: 0.0561 - lr: 1.0000e-04 - 109ms/epoch - 6ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0956 - val_loss: 0.0563 - lr: 1.0000e-04 - 118ms/epoch - 7ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0930 - val_loss: 0.0565 - lr: 1.0000e-04 - 119ms/epoch - 7ms/step
Epoch 15/500

Epoch 00015: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00015: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0906 - val_loss: 0.0567 - lr: 1.0000e-04 - 112ms/epoch - 7ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0890 - val_loss: 0.0567 - lr: 1.0000e-05 - 134ms/epoch - 8ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0887 - val_loss: 0.0567 - lr: 1.0000e-05 - 122ms/epoch - 7ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0885 - val_loss: 0.0567 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0882 - val_loss: 0.0567 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 20/500

Epoch 00020: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00020: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0880 - val_loss: 0.0568 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0878 - val_loss: 0.0568 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0875 - val_loss: 0.0568 - lr: 1.0000e-05 - 138ms/epoch - 8ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0873 - val_loss: 0.0568 - lr: 1.0000e-05 - 116ms/epoch - 7ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0870 - val_loss: 0.0568 - lr: 1.0000e-05 - 115ms/epoch - 7ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0868 - val_loss: 0.0569 - lr: 1.0000e-05 - 115ms/epoch - 7ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0866 - val_loss: 0.0569 - lr: 1.0000e-05 - 116ms/epoch - 7ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0863 - val_loss: 0.0569 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0861 - val_loss: 0.0569 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0858 - val_loss: 0.0570 - lr: 1.0000e-05 - 116ms/epoch - 7ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0856 - val_loss: 0.0570 - lr: 1.0000e-05 - 112ms/epoch - 7ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0854 - val_loss: 0.0570 - lr: 1.0000e-05 - 115ms/epoch - 7ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0851 - val_loss: 0.0570 - lr: 1.0000e-05 - 113ms/epoch - 7ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0849 - val_loss: 0.0570 - lr: 1.0000e-05 - 107ms/epoch - 6ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0846 - val_loss: 0.0571 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0844 - val_loss: 0.0571 - lr: 1.0000e-05 - 113ms/epoch - 7ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0842 - val_loss: 0.0571 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0839 - val_loss: 0.0571 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0837 - val_loss: 0.0571 - lr: 1.0000e-05 - 108ms/epoch - 6ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0834 - val_loss: 0.0572 - lr: 1.0000e-05 - 147ms/epoch - 9ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0832 - val_loss: 0.0572 - lr: 1.0000e-05 - 105ms/epoch - 6ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0830 - val_loss: 0.0572 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0827 - val_loss: 0.0572 - lr: 1.0000e-05 - 108ms/epoch - 6ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0825 - val_loss: 0.0573 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0822 - val_loss: 0.0573 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0820 - val_loss: 0.0573 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0818 - val_loss: 0.0573 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0815 - val_loss: 0.0573 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0813 - val_loss: 0.0574 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0810 - val_loss: 0.0574 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0808 - val_loss: 0.0574 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0806 - val_loss: 0.0574 - lr: 1.0000e-05 - 105ms/epoch - 6ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0803 - val_loss: 0.0575 - lr: 1.0000e-05 - 128ms/epoch - 8ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0801 - val_loss: 0.0575 - lr: 1.0000e-05 - 106ms/epoch - 6ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0799 - val_loss: 0.0575 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.04876
17/17 - 0s - loss: 1.0796 - val_loss: 0.0575 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 00055: early stopping
SMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 23.82392201920814 
RMSE:	 4.880975519218278 
MAPE:	 3.848681389687446

EMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 36.51946703896653 
RMSE:	 6.043133875644865 
MAPE:	 4.745071953502077

WMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 43.8622174139386 
RMSE:	 6.622855684214975 
MAPE:	 5.251112001385055
DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
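DEMA reduces EMA lag by subtracting a second-order smoothing: DEMA = 2·EMA(p, n) − EMA(EMA(p, n), n). A pandas sketch of that identity (an approximation — TA-Lib seeds its EMA slightly differently, so warm-up values will not match exactly):

```python
import pandas as pd

def dema(price, timeperiod=30):
    # Double EMA: 2*EMA(p) - EMA(EMA(p)). The second term cancels most of
    # the lag a plain EMA introduces on trending data.
    s = pd.Series(price, dtype=float)
    ema1 = s.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    return 2 * ema1 - ema2
```

On a linear ramp the two lags cancel almost completely, which is exactly why DEMA reacts faster than the plain EMA used earlier.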

Working on DEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16999.785, Time=3.17 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14568.588, Time=4.72 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15575.689, Time=9.20 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14566.588, Time=7.35 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16714.796, Time=9.20 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15610.140, Time=11.04 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-17225.835, Time=22.73 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-14562.588, Time=9.27 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-16751.951, Time=21.21 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-11788.089, Time=32.23 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 130.137 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8639.917
Date:                Sun, 12 Dec 2021   AIC                         -17225.835
Time:                        18:25:56   BIC                         -17099.182
Sample:                             0   HQIC                        -17177.195
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -5.894e-09   3.61e-06     -0.002      0.999   -7.09e-06    7.08e-06
x2          -5.93e-09   3.63e-06     -0.002      0.999   -7.11e-06     7.1e-06
x3         -5.905e-09   3.62e-06     -0.002      0.999    -7.1e-06    7.09e-06
x4             1.0000   3.62e-06   2.76e+05      0.000       1.000       1.000
x5         -5.457e-09   3.48e-06     -0.002      0.999   -6.83e-06    6.82e-06
x6         -3.019e-08   7.72e-06     -0.004      0.997   -1.52e-05    1.51e-05
x7          -5.87e-09   3.61e-06     -0.002      0.999   -7.08e-06    7.07e-06
x8         -5.809e-09   3.59e-06     -0.002      0.999   -7.05e-06    7.04e-06
x9         -9.293e-11   9.83e-08     -0.001      0.999   -1.93e-07    1.93e-07
x10        -2.793e-09   2.47e-06     -0.001      0.999   -4.84e-06    4.84e-06
x11        -6.095e-09   3.68e-06     -0.002      0.999   -7.21e-06     7.2e-06
x12        -5.478e-09   3.49e-06     -0.002      0.999   -6.85e-06    6.84e-06
x13         -5.91e-09   3.62e-06     -0.002      0.999    -7.1e-06    7.09e-06
x14        -4.085e-08   9.35e-06     -0.004      0.997   -1.84e-05    1.83e-05
x15         -5.93e-09   3.63e-06     -0.002      0.999   -7.12e-06    7.11e-06
x16        -1.618e-09   1.92e-06     -0.001      0.999   -3.76e-06    3.75e-06
x17        -5.076e-09   3.37e-06     -0.002      0.999    -6.6e-06    6.59e-06
x18        -1.377e-08    5.5e-06     -0.003      0.998   -1.08e-05    1.08e-05
x19        -6.135e-09   3.69e-06     -0.002      0.999   -7.23e-06    7.22e-06
x20        -1.018e-08   4.43e-06     -0.002      0.998   -8.68e-06    8.66e-06
x21        -6.911e-08   1.21e-05     -0.006      0.995   -2.39e-05    2.37e-05
x22        -5.656e-08    1.1e-05     -0.005      0.996   -2.16e-05    2.15e-05
x23        -5.355e-08   1.07e-05     -0.005      0.996    -2.1e-05    2.09e-05
x24        -3.636e-08   8.85e-06     -0.004      0.997   -1.74e-05    1.73e-05
ma.L1         -1.3899   4.86e-11  -2.86e+10      0.000      -1.390      -1.390
ma.L2          0.4032    4.6e-11   8.76e+09      0.000       0.403       0.403
sigma2      7.526e-11   6.92e-11      1.088      0.277   -6.03e-11    2.11e-10
===================================================================================
Ljung-Box (L1) (Q):                  69.65   Jarque-Bera (JB):           6422892.15
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            12.42
Prob(H) (two-sided):                  0.00   Kurtosis:                       439.89
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.74e+29. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04977, saving model to LSTM8.h5
10/10 - 4s - loss: 1.4372 - val_loss: 0.0498 - lr: 0.0010 - 4s/epoch - 418ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.4260 - val_loss: 0.0503 - lr: 0.0010 - 72ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.4155 - val_loss: 0.0510 - lr: 0.0010 - 66ms/epoch - 7ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.4050 - val_loss: 0.0516 - lr: 0.0010 - 76ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3941 - val_loss: 0.0523 - lr: 0.0010 - 73ms/epoch - 7ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3829 - val_loss: 0.0530 - lr: 0.0010 - 79ms/epoch - 8ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3749 - val_loss: 0.0531 - lr: 1.0000e-04 - 80ms/epoch - 8ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3737 - val_loss: 0.0532 - lr: 1.0000e-04 - 65ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3725 - val_loss: 0.0533 - lr: 1.0000e-04 - 84ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3714 - val_loss: 0.0533 - lr: 1.0000e-04 - 68ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3702 - val_loss: 0.0534 - lr: 1.0000e-04 - 79ms/epoch - 8ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3694 - val_loss: 0.0534 - lr: 1.0000e-05 - 96ms/epoch - 10ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3693 - val_loss: 0.0534 - lr: 1.0000e-05 - 85ms/epoch - 9ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3692 - val_loss: 0.0534 - lr: 1.0000e-05 - 85ms/epoch - 9ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3691 - val_loss: 0.0534 - lr: 1.0000e-05 - 73ms/epoch - 7ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00016: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3689 - val_loss: 0.0534 - lr: 1.0000e-05 - 84ms/epoch - 8ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3688 - val_loss: 0.0534 - lr: 1.0000e-05 - 75ms/epoch - 7ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3687 - val_loss: 0.0535 - lr: 1.0000e-05 - 74ms/epoch - 7ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3686 - val_loss: 0.0535 - lr: 1.0000e-05 - 67ms/epoch - 7ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3685 - val_loss: 0.0535 - lr: 1.0000e-05 - 73ms/epoch - 7ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3684 - val_loss: 0.0535 - lr: 1.0000e-05 - 65ms/epoch - 6ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3683 - val_loss: 0.0535 - lr: 1.0000e-05 - 84ms/epoch - 8ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3681 - val_loss: 0.0535 - lr: 1.0000e-05 - 65ms/epoch - 7ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3680 - val_loss: 0.0535 - lr: 1.0000e-05 - 71ms/epoch - 7ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3679 - val_loss: 0.0535 - lr: 1.0000e-05 - 89ms/epoch - 9ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3678 - val_loss: 0.0535 - lr: 1.0000e-05 - 91ms/epoch - 9ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3677 - val_loss: 0.0535 - lr: 1.0000e-05 - 80ms/epoch - 8ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3676 - val_loss: 0.0535 - lr: 1.0000e-05 - 80ms/epoch - 8ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3675 - val_loss: 0.0535 - lr: 1.0000e-05 - 88ms/epoch - 9ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3674 - val_loss: 0.0535 - lr: 1.0000e-05 - 88ms/epoch - 9ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3672 - val_loss: 0.0536 - lr: 1.0000e-05 - 87ms/epoch - 9ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3671 - val_loss: 0.0536 - lr: 1.0000e-05 - 71ms/epoch - 7ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3670 - val_loss: 0.0536 - lr: 1.0000e-05 - 86ms/epoch - 9ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3669 - val_loss: 0.0536 - lr: 1.0000e-05 - 77ms/epoch - 8ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3668 - val_loss: 0.0536 - lr: 1.0000e-05 - 93ms/epoch - 9ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3667 - val_loss: 0.0536 - lr: 1.0000e-05 - 95ms/epoch - 9ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3666 - val_loss: 0.0536 - lr: 1.0000e-05 - 81ms/epoch - 8ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3664 - val_loss: 0.0536 - lr: 1.0000e-05 - 70ms/epoch - 7ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3663 - val_loss: 0.0536 - lr: 1.0000e-05 - 89ms/epoch - 9ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3662 - val_loss: 0.0536 - lr: 1.0000e-05 - 81ms/epoch - 8ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3661 - val_loss: 0.0536 - lr: 1.0000e-05 - 80ms/epoch - 8ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3660 - val_loss: 0.0536 - lr: 1.0000e-05 - 76ms/epoch - 8ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3659 - val_loss: 0.0536 - lr: 1.0000e-05 - 70ms/epoch - 7ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3658 - val_loss: 0.0536 - lr: 1.0000e-05 - 87ms/epoch - 9ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3657 - val_loss: 0.0537 - lr: 1.0000e-05 - 71ms/epoch - 7ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3655 - val_loss: 0.0537 - lr: 1.0000e-05 - 77ms/epoch - 8ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3654 - val_loss: 0.0537 - lr: 1.0000e-05 - 70ms/epoch - 7ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3653 - val_loss: 0.0537 - lr: 1.0000e-05 - 72ms/epoch - 7ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3652 - val_loss: 0.0537 - lr: 1.0000e-05 - 89ms/epoch - 9ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3651 - val_loss: 0.0537 - lr: 1.0000e-05 - 73ms/epoch - 7ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.04977
10/10 - 0s - loss: 1.3650 - val_loss: 0.0537 - lr: 1.0000e-05 - 87ms/epoch - 9ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 23.82392201920814 
RMSE:	 4.880975519218278 
MAPE:	 3.848681389687446

EMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 36.51946703896653 
RMSE:	 6.043133875644865 
MAPE:	 4.745071953502077

WMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 43.8622174139386 
RMSE:	 6.622855684214975 
MAPE:	 5.251112001385055

DEMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 142.0772798190819 
RMSE:	 11.91961743593652 
MAPE:	 10.62322903609799
KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18
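KAMA adapts its smoothing to an efficiency ratio — net price change divided by the sum of absolute bar-to-bar changes — so it hugs clean trends and flattens out in choppy ranges. A sketch of Kaufman's standard formulation (seeding and warm-up details may differ from TA-Lib's implementation):

```python
import numpy as np

def kama(price, timeperiod=30, fast=2, slow=30):
    # Kaufman Adaptive MA: the smoothing constant is interpolated between a
    # fast and a slow EMA constant according to the efficiency ratio (ER).
    price = np.asarray(price, dtype=float)
    out = np.full_like(price, np.nan)
    fast_sc, slow_sc = 2 / (fast + 1), 2 / (slow + 1)
    out[timeperiod - 1] = price[timeperiod - 1]  # simple seed (assumption)
    for i in range(timeperiod, len(price)):
        change = abs(price[i] - price[i - timeperiod])
        volatility = np.sum(np.abs(np.diff(price[i - timeperiod : i + 1])))
        er = change / volatility if volatility else 1.0
        sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2
        out[i] = out[i - 1] + sc * (price[i] - out[i - 1])
    return out
```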

Working on KAMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16921.943, Time=10.33 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14568.592, Time=4.80 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16797.275, Time=9.62 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14566.592, Time=7.38 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16996.465, Time=3.46 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-16999.509, Time=3.29 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17171.315, Time=6.42 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-16994.523, Time=4.03 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=-15518.026, Time=31.80 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 81.148 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood                8613.658
Date:                Sun, 12 Dec 2021   AIC                         -17171.315
Time:                        18:32:05   BIC                         -17039.972
Sample:                             0   HQIC                        -17120.874
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          -5.14e-10    7.6e-05  -6.76e-06      1.000      -0.000       0.000
x2         -5.041e-10   7.52e-05   -6.7e-06      1.000      -0.000       0.000
x3         -4.834e-10   7.38e-05  -6.55e-06      1.000      -0.000       0.000
x4             1.0000   7.46e-05   1.34e+04      0.000       1.000       1.000
x5         -4.462e-10   7.09e-05  -6.29e-06      1.000      -0.000       0.000
x6         -3.064e-09      0.000  -1.84e-05      1.000      -0.000       0.000
x7         -4.751e-10   7.35e-05  -6.46e-06      1.000      -0.000       0.000
x8         -4.628e-10   7.28e-05  -6.36e-06      1.000      -0.000       0.000
x9          -9.21e-11   9.37e-06  -9.83e-06      1.000   -1.84e-05    1.84e-05
x10        -2.165e-10    3.1e-05  -6.98e-06      1.000   -6.08e-05    6.08e-05
x11        -4.665e-10   7.28e-05  -6.41e-06      1.000      -0.000       0.000
x12         -4.62e-10   7.23e-05  -6.39e-06      1.000      -0.000       0.000
x13        -4.906e-10   7.43e-05   -6.6e-06      1.000      -0.000       0.000
x14        -3.985e-09      0.000  -1.87e-05      1.000      -0.000       0.000
x15        -4.897e-10   7.48e-05  -6.55e-06      1.000      -0.000       0.000
x16        -7.327e-10   9.24e-05  -7.93e-06      1.000      -0.000       0.000
x17        -4.173e-10   6.93e-05  -6.02e-06      1.000      -0.000       0.000
x18        -3.397e-10   6.02e-05  -5.64e-06      1.000      -0.000       0.000
x19        -6.012e-10    8.3e-05  -7.25e-06      1.000      -0.000       0.000
x20         -9.09e-10      0.000  -9.05e-06      1.000      -0.000       0.000
x21        -6.188e-09      0.000  -2.32e-05      1.000      -0.001       0.001
x22        -1.992e-09      0.000  -1.33e-05      1.000      -0.000       0.000
x23        -3.669e-09      0.000  -1.79e-05      1.000      -0.000       0.000
x24        -1.065e-09      0.000  -1.01e-05      1.000      -0.000       0.000
ar.L1         -1.2073   5.73e-10  -2.11e+09      0.000      -1.207      -1.207
ar.L2         -0.9083   5.93e-10  -1.53e+09      0.000      -0.908      -0.908
ar.L3         -0.4033   5.84e-10  -6.91e+08      0.000      -0.403      -0.403
sigma2       8.06e-11   6.94e-11      1.162      0.245   -5.54e-11    2.17e-10
===================================================================================
Ljung-Box (L1) (Q):                  13.77   Jarque-Bera (JB):           2436796.68
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             4.07
Prob(H) (two-sided):                  0.00   Kurtosis:                       272.41
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.23e+28. Standard errors may be unstable.
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05006, saving model to LSTM8.h5
45/45 - 4s - loss: 1.4094 - val_loss: 0.0501 - lr: 0.0010 - 4s/epoch - 86ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.05006
45/45 - 0s - loss: 1.3333 - val_loss: 0.0534 - lr: 0.0010 - 263ms/epoch - 6ms/step
[… epochs 3–5: val_loss did not improve from 0.05006; loss fell from 1.2221 to 1.0759 …]

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

[… epochs 6–50: val_loss did not improve from 0.05006; loss crept down to 0.9583 at the 1e-05 floor …]

Epoch 51/500

Epoch 00051: val_loss did not improve from 0.05006
45/45 - 0s - loss: 0.9579 - val_loss: 0.0732 - lr: 1.0000e-05 - 236ms/epoch - 5ms/step
Epoch 00051: early stopping
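The learning-rate trace in the log above (reductions at epochs 6 and 11, a clamp at the 1e-05 floor at epoch 16, then early stopping) is the standard `ReduceLROnPlateau` pattern. A minimal pure-Python sketch of that logic, assuming `patience=4`, `factor=0.1` and `min_lr=1e-5` (values inferred from the log, not confirmed by the notebook's source):

```python
def reduce_lr_on_plateau(val_losses, lr=1e-3, factor=0.1, patience=4, min_lr=1e-5):
    """Return the learning rate in effect at each epoch, mimicking
    Keras ReduceLROnPlateau: once val_loss has failed to beat the best
    value for more than `patience` epochs, multiply lr by `factor`,
    never dropping below `min_lr`."""
    best, wait, history = float("inf"), 0, []
    for loss in val_losses:
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
            if wait > patience:      # plateau outlasted the patience window
                lr = max(lr * factor, min_lr)
                wait = 0
        history.append(lr)
    return history

# val_loss improves once and then plateaus, as in the log above
lrs = reduce_lr_on_plateau([0.0501] + [0.06] * 14)
```

With this plateau the schedule reduces at epoch 6, again at epoch 11, and then sits at the floor, matching the reductions printed by Keras.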
SMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 23.82392201920814 
RMSE:	 4.880975519218278 
MAPE:	 3.848681389687446

EMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 36.51946703896653 
RMSE:	 6.043133875644865 
MAPE:	 4.745071953502077

WMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 43.8622174139386 
RMSE:	 6.622855684214975 
MAPE:	 5.251112001385055

DEMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 142.0772798190819 
RMSE:	 11.91961743593652 
MAPE:	 10.62322903609799

KAMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	50.0% Accuracy
MSE:	 22.038946216129663 
RMSE:	 4.694565604625168 
MAPE:	 3.7548184287573756
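The per-indicator figures above (directional accuracy, MSE, RMSE, MAPE) can be reproduced with a few lines of NumPy. A minimal sketch, assuming `y_true` and `y_pred` are aligned 1-D arrays of closing prices and hybrid predictions (the names and the helper `evaluate` are illustrative, not from the notebook):

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Regression and directional-accuracy metrics for a price forecast."""
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs(err / y_true)) * 100.0
    # directional accuracy: did the prediction move the same way as the close?
    hits = np.sign(np.diff(y_pred)) == np.sign(np.diff(y_true))
    return {"MSE": mse, "RMSE": rmse, "MAPE": mape,
            "DirAcc%": 100.0 * np.mean(hits)}

m = evaluate(np.array([100.0, 102.0, 101.0, 103.0]),
             np.array([ 99.0, 101.5, 101.8, 102.5]))
```

"Prediction vs Close" in the printout compares predicted moves against actual closing moves in this spirit; the exact pairing used for "Prediction vs Prediction" is not shown in this excerpt.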
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14
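As the docstring above says, MIDPOINT is simply the midpoint of the rolling maximum and minimum of the input over `timeperiod` bars. A minimal pandas equivalent (a sketch that mirrors the TA-Lib formula; warm-up entries are NaN as in TA-Lib, but edge-case handling is not guaranteed identical):

```python
import pandas as pd

def midpoint(price: pd.Series, timeperiod: int = 14) -> pd.Series:
    """MIDPOINT = (highest + lowest) / 2 over a rolling window,
    mirroring TA-Lib's MIDPOINT (Overlap Studies)."""
    roll = price.rolling(timeperiod)
    return (roll.max() + roll.min()) / 2.0

out = midpoint(pd.Series([1.0, 3.0, 2.0, 5.0, 4.0]), timeperiod=3)
```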

Working on MIDPOINT predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16999.768, Time=3.13 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14568.591, Time=4.75 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15581.065, Time=9.16 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14566.591, Time=7.15 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16536.628, Time=9.56 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-13971.493, Time=10.03 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-17226.044, Time=22.66 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-14562.591, Time=9.61 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-16754.945, Time=20.07 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-15001.855, Time=20.68 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 116.804 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8640.022
Date:                Sun, 12 Dec 2021   AIC                         -17226.044
Time:                        18:35:59   BIC                         -17099.391
Sample:                             0   HQIC                        -17177.404
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -5.031e-09   1.06e-05     -0.000      1.000   -2.08e-05    2.08e-05
x2          -4.99e-09   8.12e-06     -0.001      1.000   -1.59e-05    1.59e-05
x3         -5.114e-09   1.38e-05     -0.000      1.000    -2.7e-05     2.7e-05
x4             1.0000   8.91e-06   1.12e+05      0.000       1.000       1.000
x5          -4.55e-09    8.2e-06     -0.001      1.000   -1.61e-05    1.61e-05
x6         -9.992e-08      0.001     -0.000      1.000      -0.002       0.002
x7         -4.607e-09   1.97e-05     -0.000      1.000   -3.86e-05    3.86e-05
x8         -4.591e-09   1.77e-05     -0.000      1.000   -3.48e-05    3.48e-05
x9         -2.538e-09   1.13e-05     -0.000      1.000   -2.21e-05    2.21e-05
x10        -4.315e-09   6.08e-06     -0.001      0.999   -1.19e-05    1.19e-05
x11        -4.545e-09   1.62e-05     -0.000      1.000   -3.18e-05    3.18e-05
x12        -4.701e-09   1.97e-05     -0.000      1.000   -3.87e-05    3.87e-05
x13        -4.823e-09   1.18e-05     -0.000      1.000    -2.3e-05     2.3e-05
x14         -4.08e-08   4.99e-05     -0.001      0.999   -9.79e-05    9.78e-05
x15        -5.557e-09   2.03e-05     -0.000      1.000   -3.99e-05    3.99e-05
x16        -3.541e-09    1.3e-05     -0.000      1.000   -2.55e-05    2.55e-05
x17        -3.463e-09   1.51e-05     -0.000      1.000   -2.97e-05    2.97e-05
x18        -1.534e-08      4e-05     -0.000      1.000   -7.85e-05    7.85e-05
x19        -6.118e-09   2.07e-05     -0.000      1.000   -4.05e-05    4.05e-05
x20        -1.581e-08   3.38e-05     -0.000      1.000   -6.62e-05    6.61e-05
x21        -5.505e-08    5.6e-05     -0.001      0.999      -0.000       0.000
x22        -2.936e-08   4.55e-05     -0.001      0.999   -8.92e-05    8.92e-05
x23        -3.882e-08   4.89e-05     -0.001      0.999   -9.58e-05    9.57e-05
x24        -2.099e-08   4.87e-05     -0.000      1.000   -9.54e-05    9.54e-05
ma.L1         -1.3900   1.23e-07  -1.13e+07      0.000      -1.390      -1.390
ma.L2          0.4044   1.43e-07   2.82e+06      0.000       0.404       0.404
sigma2      7.525e-11   7.22e-11      1.042      0.297   -6.63e-11    2.17e-10
===================================================================================
Ljung-Box (L1) (Q):                  55.84   Jarque-Bera (JB):           1335305.59
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.09   Skew:                             5.74
Prob(H) (two-sided):                  0.00   Kurtosis:                       202.19
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.77e+23. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 
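The information criteria in the SARIMAX table above can be checked directly from the reported log-likelihood. Assuming statsmodels counts k = 27 estimated parameters (24 exogenous coefficients, ma.L1, ma.L2 and sigma2) and uses the differencing-adjusted sample size n = 808 − d with d = 3 (an inference from the table, not stated in the output):

```python
import math

loglik = 8640.022   # Log Likelihood from the table above
k = 27              # 24 exogenous betas + ma.L1 + ma.L2 + sigma2
n = 808 - 3         # observations minus the differencing order d = 3

aic = 2 * k - 2 * loglik          # AIC  = 2k - 2 ln L
bic = k * math.log(n) - 2 * loglik  # BIC = k ln n - 2 ln L
```

Both reproduce the table to within rounding (AIC −17226.044, BIC ≈ −17099.391), which is also why the stepwise search above ranks candidate orders purely by AIC.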

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05306, saving model to LSTM8.h5
58/58 - 4s - loss: 1.4273 - val_loss: 0.0531 - lr: 0.0010 - 4s/epoch - 67ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.05306 to 0.05231, saving model to LSTM8.h5
58/58 - 0s - loss: 1.3303 - val_loss: 0.0523 - lr: 0.0010 - 293ms/epoch - 5ms/step
[… epochs 3–6: val_loss did not improve from 0.05231; loss fell from 1.1533 to 0.7991 …]

Epoch 00007: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00012: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00017: ReduceLROnPlateau reducing learning rate to 1e-05.

[… epochs 7–51: val_loss did not improve from 0.05231; loss crept down to 0.7260 at the 1e-05 floor …]

Epoch 52/500

Epoch 00052: val_loss did not improve from 0.05231
58/58 - 0s - loss: 0.7257 - val_loss: 0.0818 - lr: 1.0000e-05 - 306ms/epoch - 5ms/step
Epoch 00052: early stopping

MIDPOINT
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 18.4805779796863 
RMSE:	 4.298904276636815 
MAPE:	 3.422385888494848
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
19
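Tillson's T3, printed above, is a triple application of a "generalized DEMA": GD(x) = (1 + v)·EMA(x) − v·EMA(EMA(x)), with T3 = GD(GD(GD(x))). A minimal pandas sketch of that recursion (it approximates TA-Lib's T3; TA-Lib's warm-up and initialisation details differ):

```python
import pandas as pd

def t3(price: pd.Series, timeperiod: int = 5, vfactor: float = 0.7) -> pd.Series:
    """Tillson T3 moving average via three nested generalized DEMAs."""
    def ema(s: pd.Series) -> pd.Series:
        return s.ewm(span=timeperiod, adjust=False).mean()

    def gd(s: pd.Series) -> pd.Series:
        # generalized DEMA: overshoot-corrected EMA controlled by vfactor
        return (1 + vfactor) * ema(s) - vfactor * ema(ema(s))

    return gd(gd(gd(price)))

flat = pd.Series([10.0] * 30)
smooth = t3(flat)  # any smoothing of a constant series returns the constant
```

The six nested EMAs make T3 much smoother, and laggier, than a plain EMA of the same period, which is relevant to the volatility balance discussed at the top of the notebook.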

Working on T3 predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17000.569, Time=3.53 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-15576.554, Time=5.81 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16078.305, Time=8.31 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-15574.554, Time=9.30 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16998.627, Time=3.30 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-16429.916, Time=12.90 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-17000.664, Time=3.34 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-15700.026, Time=11.65 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-15704.282, Time=15.57 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-16998.664, Time=3.35 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 77.090 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8527.332
Date:                Sun, 12 Dec 2021   AIC                         -17000.664
Time:                        18:41:53   BIC                         -16874.011
Sample:                             0   HQIC                        -16952.024
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          8.378e-14   2.16e-06   3.89e-08      1.000   -4.23e-06    4.23e-06
x2          7.457e-14   2.15e-06   3.47e-08      1.000   -4.22e-06    4.22e-06
x3          2.279e-14   2.16e-06   1.05e-08      1.000   -4.24e-06    4.24e-06
x4             1.0000   2.16e-06   4.63e+05      0.000       1.000       1.000
x5          1.211e-12   2.07e-06   5.86e-07      1.000   -4.05e-06    4.05e-06
x6          3.146e-15   2.67e-06   1.18e-09      1.000   -5.23e-06    5.23e-06
x7          1.593e-13   2.15e-06   7.41e-08      1.000   -4.21e-06    4.21e-06
x8            -0.0001    2.1e-06    -48.778      0.000      -0.000   -9.82e-05
x9          5.141e-14   6.35e-07    8.1e-08      1.000   -1.24e-06    1.24e-06
x10        -6.174e-05   1.34e-06    -45.995      0.000   -6.44e-05   -5.91e-05
x11            0.0003   2.15e-06    148.354      0.000       0.000       0.000
x12           -0.0002   2.02e-06    -93.730      0.000      -0.000      -0.000
x13         1.967e-14   2.16e-06    9.1e-09      1.000   -4.23e-06    4.23e-06
x14        -1.297e-14   5.65e-06  -2.29e-09      1.000   -1.11e-05    1.11e-05
x15         -3.18e-12   1.82e-06  -1.75e-06      1.000   -3.57e-06    3.57e-06
x16        -1.426e-12   4.51e-06  -3.16e-07      1.000   -8.84e-06    8.84e-06
x17         7.474e-13   2.37e-06   3.16e-07      1.000   -4.64e-06    4.64e-06
x18         -2.92e-13    2.9e-06  -1.01e-07      1.000   -5.68e-06    5.68e-06
x19        -4.211e-14   1.89e-06  -2.22e-08      1.000   -3.71e-06    3.71e-06
x20        -1.515e-13    1.2e-06  -1.26e-07      1.000   -2.36e-06    2.36e-06
x21         6.555e-13   6.37e-06   1.03e-07      1.000   -1.25e-05    1.25e-05
x22         1.212e-14   6.19e-06   1.96e-09      1.000   -1.21e-05    1.21e-05
x23        -3.877e-13   3.76e-06  -1.03e-07      1.000   -7.38e-06    7.38e-06
x24         8.127e-15   4.01e-06   2.03e-09      1.000   -7.86e-06    7.86e-06
ma.L1         -1.3370   3.84e-12  -3.48e+11      0.000      -1.337      -1.337
ma.L2          0.4289   1.65e-12    2.6e+11      0.000       0.429       0.429
sigma2          1e-10   6.99e-11      1.430      0.153   -3.71e-11    2.37e-10
===================================================================================
Ljung-Box (L1) (Q):                   4.57   Jarque-Bera (JB):           3228712.87
Prob(Q):                              0.03   Prob(JB):                         0.00
Heteroskedasticity (H):               0.12   Skew:                            -9.87
Prob(H) (two-sided):                  0.00   Kurtosis:                       312.63
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.57e+30. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04608, saving model to LSTM8.h5
43/43 - 4s - loss: 1.3504 - val_loss: 0.0461 - lr: 0.0010 - 4s/epoch - 102ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.04608
43/43 - 0s - loss: 1.2322 - val_loss: 0.0478 - lr: 0.0010 - 225ms/epoch - 5ms/step
[… epochs 3–5: val_loss did not improve from 0.04608; loss fell from 1.1310 to 0.9843 …]

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

[… epochs 6–43: val_loss did not improve from 0.04608; loss crept down to 0.8583 at the 1e-05 floor …]
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.04608
43/43 - 0s - loss: 0.8579 - val_loss: 0.0618 - lr: 1.0000e-05 - 255ms/epoch - 6ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.04608
43/43 - 0s - loss: 0.8575 - val_loss: 0.0618 - lr: 1.0000e-05 - 255ms/epoch - 6ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.04608
43/43 - 0s - loss: 0.8571 - val_loss: 0.0619 - lr: 1.0000e-05 - 248ms/epoch - 6ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.04608
43/43 - 0s - loss: 0.8567 - val_loss: 0.0619 - lr: 1.0000e-05 - 240ms/epoch - 6ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.04608
43/43 - 0s - loss: 0.8563 - val_loss: 0.0620 - lr: 1.0000e-05 - 247ms/epoch - 6ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.04608
43/43 - 0s - loss: 0.8559 - val_loss: 0.0621 - lr: 1.0000e-05 - 258ms/epoch - 6ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.04608
43/43 - 0s - loss: 0.8555 - val_loss: 0.0621 - lr: 1.0000e-05 - 250ms/epoch - 6ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.04608
43/43 - 0s - loss: 0.8552 - val_loss: 0.0622 - lr: 1.0000e-05 - 270ms/epoch - 6ms/step
Epoch 00051: early stopping
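The learning-rate trajectory visible in these logs (1e-3 → 1e-4 → 1e-5, then held at 1e-5 even when a further reduction is announced) is consistent with Keras's `ReduceLROnPlateau` using `factor=0.1`, `patience=5`, and `min_lr=1e-5`. A minimal standalone sketch of that schedule logic (a hypothetical reimplementation for illustration, not the notebook's actual callback):

```python
class PlateauLR:
    """Mimics ReduceLROnPlateau: cut lr by `factor` after `patience`
    epochs without val_loss improvement, clamped at `min_lr`."""

    def __init__(self, lr=1e-3, factor=0.1, patience=5, min_lr=1e-5):
        self.lr, self.factor = lr, factor
        self.patience, self.min_lr = patience, min_lr
        self.best = float("inf")
        self.wait = 0

    def step(self, val_loss):
        if val_loss < self.best:
            # Improvement: remember it and reset the patience counter.
            self.best = val_loss
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                # No improvement for `patience` epochs: reduce, but never
                # below min_lr (matching the lr floor of 1e-05 in the log).
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.wait = 0
        return self.lr
```

Replaying the logged pattern (one improvement, then a plateau) reproduces the reductions at five-epoch intervals seen above.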
SMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 23.82392201920814 
RMSE:	 4.880975519218278 
MAPE:	 3.848681389687446

EMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 36.51946703896653 
RMSE:	 6.043133875644865 
MAPE:	 4.745071953502077

WMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 43.8622174139386 
RMSE:	 6.622855684214975 
MAPE:	 5.251112001385055

DEMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 142.0772798190819 
RMSE:	 11.91961743593652 
MAPE:	 10.62322903609799

KAMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	50.0% Accuracy
MSE:	 22.038946216129663 
RMSE:	 4.694565604625168 
MAPE:	 3.7548184287573756

MIDPOINT
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 18.4805779796863 
RMSE:	 4.298904276636815 
MAPE:	 3.422385888494848

T3
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 77.67949047095865 
RMSE:	 8.813596908808496 
MAPE:	 7.190279750268759
TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9
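Per the TA-Lib help text above, TEMA is the Triple Exponential Moving Average, which can be written as 3·EMA₁ − 3·EMA₂ + EMA₃, where EMA₂ is the EMA of EMA₁ and EMA₃ the EMA of EMA₂. A minimal pandas sketch of that formula (an illustration only; TA-Lib seeds its EMAs differently, so warm-up values will not match TA-Lib exactly):

```python
import pandas as pd


def tema(price: pd.Series, timeperiod: int = 30) -> pd.Series:
    """Triple EMA: 3*EMA - 3*EMA(EMA) + EMA(EMA(EMA)).

    Uses pandas' span-based EMA; the cascading of EMAs reduces the lag
    that a single EMA of the same period would introduce.
    """
    ema1 = price.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    ema3 = ema2.ewm(span=timeperiod, adjust=False).mean()
    return 3 * ema1 - 3 * ema2 + ema3
```

On a constant price series every EMA equals the constant, so the TEMA does too (3c − 3c + c = c), which is a quick sanity check on the formula.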

Working on TEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16762.799, Time=5.08 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14158.507, Time=2.96 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16445.598, Time=9.14 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-16144.282, Time=11.83 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15275.101, Time=9.02 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15897.090, Time=13.05 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16446.973, Time=9.39 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-16567.628, Time=3.59 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-16523.926, Time=3.73 sec
 ARIMA(1,3,1)(0,0,0)[0] intercept   : AIC=-16696.008, Time=3.66 sec

Best model:  ARIMA(1,3,1)(0,0,0)[0]          
Total fit time: 71.466 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(1, 3, 1)   Log Likelihood                8408.400
Date:                Sun, 12 Dec 2021   AIC                         -16762.799
Time:                        18:47:46   BIC                         -16636.147
Sample:                             0   HQIC                        -16714.159
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -5.289e-07      0.001     -0.000      1.000      -0.002       0.002
x2         -5.288e-07      0.001     -0.001      0.999      -0.002       0.002
x3         -5.306e-07      0.001     -0.000      1.000      -0.002       0.002
x4             1.0000      0.000   2045.695      0.000       0.999       1.001
x5         -5.041e-07      0.000     -0.001      0.999      -0.001       0.001
x6         -9.879e-07   4.33e-05     -0.023      0.982   -8.58e-05    8.38e-05
x7         -5.185e-07      0.001     -0.001      0.999      -0.001       0.001
x8             0.0001      0.000      0.643      0.520      -0.000       0.001
x9          9.794e-08      0.001      0.000      1.000      -0.001       0.001
x10            0.0001      0.000      0.313      0.754      -0.001       0.001
x11           -0.0004      0.000     -2.284      0.022      -0.001   -6.06e-05
x12            0.0005      0.000      2.453      0.014       0.000       0.001
x13        -5.277e-07      0.000     -0.002      0.999      -0.001       0.001
x14        -1.566e-06      0.000     -0.005      0.996      -0.001       0.001
x15        -5.136e-07   9.86e-05     -0.005      0.996      -0.000       0.000
x16         -7.66e-07      0.000     -0.002      0.999      -0.001       0.001
x17        -5.146e-07      0.000     -0.003      0.998      -0.000       0.000
x18        -1.701e-07      0.001     -0.000      1.000      -0.001       0.001
x19         -5.77e-07   8.54e-05     -0.007      0.995      -0.000       0.000
x20         5.026e-07      0.001      0.001      0.999      -0.001       0.001
x21        -2.058e-06      0.000     -0.010      0.992      -0.000       0.000
x22        -1.098e-06      0.001     -0.001      0.999      -0.003       0.003
x23        -1.472e-06      0.001     -0.003      0.998      -0.001       0.001
x24        -8.255e-07      0.001     -0.001      0.999      -0.002       0.002
ar.L1         -0.2866   3.63e-05  -7897.273      0.000      -0.287      -0.287
ma.L1         -0.9124   1.46e-06  -6.25e+05      0.000      -0.912      -0.912
sigma2       9.98e-11   7.23e-11      1.380      0.168    -4.2e-11    2.42e-10
===================================================================================
Ljung-Box (L1) (Q):                  83.51   Jarque-Bera (JB):           4742889.91
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            -5.71
Prob(H) (two-sided):                  0.00   Kurtosis:                       378.86
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 4.2e+22. Standard errors may be unstable.
ARIMA order: (1, 3, 1) 
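The stepwise search above selects the order minimizing AIC = 2k − 2 ln L. As a sanity check against the SARIMAX table: this model estimates 27 parameters (the 24 exogenous coefficients x1–x24, plus ar.L1, ma.L1, and sigma2), which together with the reported log-likelihood of 8408.400 reproduces the reported AIC:

```python
# AIC = 2k - 2*lnL, using the values printed in the SARIMAX table above.
k = 27            # 24 exogenous betas + ar.L1 + ma.L1 + sigma2
log_lik = 8408.400
aic = 2 * k - 2 * log_lik  # ≈ -16762.8, matching the reported -16762.799
```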

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04686, saving model to LSTM8.h5
90/90 - 4s - loss: 1.2088 - val_loss: 0.0469 - lr: 0.0010 - 4s/epoch - 45ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.04686
90/90 - 0s - loss: 1.0094 - val_loss: 0.0553 - lr: 0.0010 - 456ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.04686
90/90 - 0s - loss: 0.8756 - val_loss: 0.0627 - lr: 0.0010 - 484ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.04686
90/90 - 0s - loss: 0.7962 - val_loss: 0.0702 - lr: 0.0010 - 477ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.04686
90/90 - 1s - loss: 0.7458 - val_loss: 0.0777 - lr: 0.0010 - 543ms/epoch - 6ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.04686
90/90 - 1s - loss: 0.7107 - val_loss: 0.0852 - lr: 0.0010 - 549ms/epoch - 6ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.04686
90/90 - 0s - loss: 0.6938 - val_loss: 0.0859 - lr: 1.0000e-04 - 477ms/epoch - 5ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.04686
90/90 - 1s - loss: 0.6912 - val_loss: 0.0867 - lr: 1.0000e-04 - 532ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.04686
90/90 - 0s - loss: 0.6886 - val_loss: 0.0876 - lr: 1.0000e-04 - 472ms/epoch - 5ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.04686
90/90 - 1s - loss: 0.6860 - val_loss: 0.0884 - lr: 1.0000e-04 - 564ms/epoch - 6ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.04686
90/90 - 0s - loss: 0.6834 - val_loss: 0.0893 - lr: 1.0000e-04 - 473ms/epoch - 5ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.04686
90/90 - 1s - loss: 0.6818 - val_loss: 0.0894 - lr: 1.0000e-05 - 518ms/epoch - 6ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.04686
90/90 - 1s - loss: 0.6815 - val_loss: 0.0895 - lr: 1.0000e-05 - 554ms/epoch - 6ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.04686
90/90 - 0s - loss: 0.6812 - val_loss: 0.0896 - lr: 1.0000e-05 - 458ms/epoch - 5ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.04686
90/90 - 0s - loss: 0.6810 - val_loss: 0.0897 - lr: 1.0000e-05 - 443ms/epoch - 5ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00016: val_loss did not improve from 0.04686
90/90 - 0s - loss: 0.6807 - val_loss: 0.0899 - lr: 1.0000e-05 - 451ms/epoch - 5ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.04686
90/90 - 0s - loss: 0.6804 - val_loss: 0.0900 - lr: 1.0000e-05 - 469ms/epoch - 5ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.04686
90/90 - 0s - loss: 0.6801 - val_loss: 0.0901 - lr: 1.0000e-05 - 451ms/epoch - 5ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.04686
90/90 - 0s - loss: 0.6798 - val_loss: 0.0902 - lr: 1.0000e-05 - 452ms/epoch - 5ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.04686
90/90 - 0s - loss: 0.6795 - val_loss: 0.0903 - lr: 1.0000e-05 - 431ms/epoch - 5ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.04686
90/90 - 1s - loss: 0.6791 - val_loss: 0.0905 - lr: 1.0000e-05 - 548ms/epoch - 6ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.04686
90/90 - 0s - loss: 0.6788 - val_loss: 0.0906 - lr: 1.0000e-05 - 443ms/epoch - 5ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.04686
90/90 - 1s - loss: 0.6785 - val_loss: 0.0907 - lr: 1.0000e-05 - 539ms/epoch - 6ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.04686
90/90 - 1s - loss: 0.6782 - val_loss: 0.0909 - lr: 1.0000e-05 - 528ms/epoch - 6ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.04686
90/90 - 0s - loss: 0.6778 - val_loss: 0.0910 - lr: 1.0000e-05 - 453ms/epoch - 5ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.04686
90/90 - 1s - loss: 0.6775 - val_loss: 0.0911 - lr: 1.0000e-05 - 506ms/epoch - 6ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.04686
90/90 - 0s - loss: 0.6772 - val_loss: 0.0913 - lr: 1.0000e-05 - 455ms/epoch - 5ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.04686
90/90 - 1s - loss: 0.6768 - val_loss: 0.0914 - lr: 1.0000e-05 - 514ms/epoch - 6ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.04686
90/90 - 0s - loss: 0.6765 - val_loss: 0.0916 - lr: 1.0000e-05 - 478ms/epoch - 5ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.04686
90/90 - 1s - loss: 0.6762 - val_loss: 0.0917 - lr: 1.0000e-05 - 514ms/epoch - 6ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.04686
90/90 - 1s - loss: 0.6758 - val_loss: 0.0919 - lr: 1.0000e-05 - 535ms/epoch - 6ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.04686
90/90 - 0s - loss: 0.6755 - val_loss: 0.0921 - lr: 1.0000e-05 - 432ms/epoch - 5ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.04686
90/90 - 1s - loss: 0.6751 - val_loss: 0.0922 - lr: 1.0000e-05 - 543ms/epoch - 6ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.04686
90/90 - 0s - loss: 0.6748 - val_loss: 0.0924 - lr: 1.0000e-05 - 454ms/epoch - 5ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.04686
90/90 - 0s - loss: 0.6744 - val_loss: 0.0925 - lr: 1.0000e-05 - 481ms/epoch - 5ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.04686
90/90 - 1s - loss: 0.6741 - val_loss: 0.0927 - lr: 1.0000e-05 - 533ms/epoch - 6ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.04686
90/90 - 1s - loss: 0.6738 - val_loss: 0.0929 - lr: 1.0000e-05 - 574ms/epoch - 6ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.04686
90/90 - 0s - loss: 0.6734 - val_loss: 0.0930 - lr: 1.0000e-05 - 498ms/epoch - 6ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.04686
90/90 - 0s - loss: 0.6731 - val_loss: 0.0932 - lr: 1.0000e-05 - 486ms/epoch - 5ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.04686
90/90 - 0s - loss: 0.6727 - val_loss: 0.0934 - lr: 1.0000e-05 - 467ms/epoch - 5ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.04686
90/90 - 1s - loss: 0.6724 - val_loss: 0.0936 - lr: 1.0000e-05 - 540ms/epoch - 6ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.04686
90/90 - 1s - loss: 0.6720 - val_loss: 0.0937 - lr: 1.0000e-05 - 577ms/epoch - 6ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.04686
90/90 - 0s - loss: 0.6717 - val_loss: 0.0939 - lr: 1.0000e-05 - 461ms/epoch - 5ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.04686
90/90 - 0s - loss: 0.6713 - val_loss: 0.0941 - lr: 1.0000e-05 - 458ms/epoch - 5ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.04686
90/90 - 1s - loss: 0.6710 - val_loss: 0.0943 - lr: 1.0000e-05 - 541ms/epoch - 6ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.04686
90/90 - 1s - loss: 0.6706 - val_loss: 0.0945 - lr: 1.0000e-05 - 549ms/epoch - 6ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.04686
90/90 - 1s - loss: 0.6703 - val_loss: 0.0947 - lr: 1.0000e-05 - 512ms/epoch - 6ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.04686
90/90 - 1s - loss: 0.6699 - val_loss: 0.0948 - lr: 1.0000e-05 - 525ms/epoch - 6ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.04686
90/90 - 0s - loss: 0.6696 - val_loss: 0.0950 - lr: 1.0000e-05 - 460ms/epoch - 5ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.04686
90/90 - 1s - loss: 0.6692 - val_loss: 0.0952 - lr: 1.0000e-05 - 558ms/epoch - 6ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.04686
90/90 - 0s - loss: 0.6689 - val_loss: 0.0954 - lr: 1.0000e-05 - 450ms/epoch - 5ms/step
Epoch 00051: early stopping
TEMA
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 31.990355699152573 
RMSE:	 5.656001741438255 
MAPE:	 5.124316314221195
Runtime: mins: 54.23975520736664
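The per-MA summaries above report a directional accuracy alongside MSE, RMSE, and MAPE. A minimal NumPy sketch of those metrics (hypothetical helper names; the notebook's own implementation is not shown):

```python
import numpy as np


def mse(y_true, y_pred):
    """Mean squared error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean((y_true - y_pred) ** 2))


def rmse(y_true, y_pred):
    """Root mean squared error."""
    return float(np.sqrt(mse(y_true, y_pred)))


def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)


def directional_accuracy(y_true, y_pred):
    """Percentage of steps where the predicted move has the same sign as
    the actual move (the 'Prediction vs Close' style of accuracy)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    same = np.sign(np.diff(y_pred)) == np.sign(np.diff(y_true))
    return float(np.mean(same) * 100)
```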

Architecture Used

In [140]:
from google.colab import files
import cv2
uploaded = files.upload()
Saving Experiment8.png to Experiment8 (1).png
In [143]:
img = cv2.imread('Experiment8.png')
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture '+imgfile,fontsize=18)
plt.imshow(img)
Out[143]:
<matplotlib.image.AxesImage at 0x7f75c4377150>
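One caveat on the cell above: `cv2.imread` returns channels in BGR order, while matplotlib's `imshow` assumes RGB, so the displayed architecture image may show swapped colors. The usual fix is `cv2.cvtColor(img, cv2.COLOR_BGR2RGB)`, or equivalently a NumPy channel flip, sketched here without OpenCV:

```python
import numpy as np

# A tiny 1x2 "image": a pure-blue and a pure-red pixel in BGR order,
# as cv2.imread would return them.
bgr = np.array([[[255, 0, 0], [0, 0, 255]]], dtype=np.uint8)

# Reverse the channel axis to obtain RGB for plt.imshow.
rgb = bgr[..., ::-1]
```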

Model Plots

In [112]:
with open('simulation8_data.json') as json_file:
    simulation8 = json.load(json_file)
fileimg = 'Experiment8'
In [113]:
for i in range(len(list(simulation8.keys()))):
  SIM = list(simulation8.keys())[i]
  plot_train(simulation8,SIM)
  plot_test(simulation8,SIM)
----- Train RMSE for SMA ----- 18.846065584784647
----- Train_MSE_LSTM for SMA ----- 355.1741880260043
----- Train MAE LSTM for SMA ----- 18.820755524210412
----- Test RMSE for SMA----- 4.880975519218278
----- Test_MSE_LSTM for SMA----- 23.82392201920814
----- Test_MAE_LSTM for SMA----- 3.848681389687446
----- Train RMSE for EMA ----- 24.364485496157467
----- Train_MSE_LSTM for EMA ----- 593.6281534924675
----- Train MAE LSTM for EMA ----- 24.35965843720011
----- Test RMSE for EMA----- 6.043133875644865
----- Test_MSE_LSTM for EMA----- 36.51946703896653
----- Test_MAE_LSTM for EMA----- 4.745071953502077
----- Train RMSE for WMA ----- 22.522819506061108
----- Train_MSE_LSTM for WMA ----- 507.27739850260673
----- Train MAE LSTM for WMA ----- 22.511068136385173
----- Test RMSE for WMA----- 6.622855684214975
----- Test_MSE_LSTM for WMA----- 43.8622174139386
----- Test_MAE_LSTM for WMA----- 5.251112001385055
----- Train RMSE for DEMA ----- 28.644253182641428
----- Train_MSE_LSTM for DEMA ----- 820.4932403912636
----- Train MAE LSTM for DEMA ----- 28.644042274739483
----- Test RMSE for DEMA----- 11.91961743593652
----- Test_MSE_LSTM for DEMA----- 142.0772798190819
----- Test_MAE_LSTM for DEMA----- 10.62322903609799
----- Train RMSE for KAMA ----- 18.988700576595924
----- Train_MSE_LSTM for KAMA ----- 360.5707495876144
----- Train MAE LSTM for KAMA ----- 18.970468660392385
----- Test RMSE for KAMA----- 4.694565604625168
----- Test_MSE_LSTM for KAMA----- 22.038946216129663
----- Test_MAE_LSTM for KAMA----- 3.7548184287573756
----- Train RMSE for MIDPOINT ----- 16.06235031499391
----- Train_MSE_LSTM for MIDPOINT ----- 257.999097641585
----- Train MAE LSTM for MIDPOINT ----- 15.981051393074564
----- Test RMSE for MIDPOINT----- 4.298904276636815
----- Test_MSE_LSTM for MIDPOINT----- 18.4805779796863
----- Test_MAE_LSTM for MIDPOINT----- 3.422385888494848
----- Train RMSE for T3 ----- 20.284725351435558
----- Train_MSE_LSTM for T3 ----- 411.47008258317237
----- Train MAE LSTM for T3 ----- 20.220124100694562
----- Test RMSE for T3----- 8.813596908808496
----- Test_MSE_LSTM for T3----- 77.67949047095865
----- Test_MAE_LSTM for T3----- 7.190279750268759
----- Train RMSE for TEMA ----- 18.95244476174229
----- Train_MSE_LSTM for TEMA ----- 359.19516244689277
----- Train MAE LSTM for TEMA ----- 18.91601826176785
----- Test RMSE for TEMA----- 5.656001741438255
----- Test_MSE_LSTM for TEMA----- 31.990355699152573
----- Test_MAE_LSTM for TEMA----- 5.124316314221195

List of RMSE, MSE & MAE scores for Test data

In [5]:
import json
with open('simulation1_data.json') as json_file:
    simulation1 = json.load(json_file)

with open('simulation2_data.json') as json_file:
    simulation2 = json.load(json_file)

with open('simulation3_data.json') as json_file:
    simulation3 = json.load(json_file)

with open('simulation4_data.json') as json_file:
    simulation4 = json.load(json_file)

with open('simulation5_data.json') as json_file:
    simulation5 = json.load(json_file)

with open('simulation6_data.json') as json_file:
    simulation6 = json.load(json_file)

with open('simulation7_data.json') as json_file:
    simulation7 = json.load(json_file)

with open('simulation8_data.json') as json_file:
    simulation8 = json.load(json_file)
In [6]:
text = 'Stock with Sentiment '
simulations = [simulation1,simulation2,simulation3,simulation4,simulation5,simulation6,simulation7,simulation8]
for i,simulation in enumerate(simulations):
  for ma in simulation.keys():
    print(text+'Experiment ',i+1,' for MA :',ma,'the MSE  is: ',simulation[ma]['final']['mse'])
    print(text+'Experiment ',i+1,' for MA :',ma,'the RMSE is: ',simulation[ma]['final']['rmse'])
    print(text+'Experiment ',i+1,' for MA :',ma,'the MAE is: ',simulation[ma]['final']['mae'])
Stock with Sentiment Experiment  1  for MA : SMA the MSE  is:  39.29635655476263
Stock with Sentiment Experiment  1  for MA : SMA the RMSE is:  6.268680607174258
Stock with Sentiment Experiment  1  for MA : SMA the MAE is:  5.07416987111494
Stock with Sentiment Experiment  1  for MA : EMA the MSE  is:  30.25057164247689
Stock with Sentiment Experiment  1  for MA : EMA the RMSE is:  5.50005196725239
Stock with Sentiment Experiment  1  for MA : EMA the MAE is:  4.444486270049439
Stock with Sentiment Experiment  1  for MA : WMA the MSE  is:  64.92643731055051
Stock with Sentiment Experiment  1  for MA : WMA the RMSE is:  8.057694292448089
Stock with Sentiment Experiment  1  for MA : WMA the MAE is:  6.289003841800032
Stock with Sentiment Experiment  1  for MA : DEMA the MSE  is:  52.98651621196421
Stock with Sentiment Experiment  1  for MA : DEMA the RMSE is:  7.279183760008
Stock with Sentiment Experiment  1  for MA : DEMA the MAE is:  5.725540843661134
Stock with Sentiment Experiment  1  for MA : KAMA the MSE  is:  33.6026987861483
Stock with Sentiment Experiment  1  for MA : KAMA the RMSE is:  5.796783486223054
Stock with Sentiment Experiment  1  for MA : KAMA the MAE is:  4.518981487962124
Stock with Sentiment Experiment  1  for MA : MIDPOINT the MSE  is:  22.7414116550439
Stock with Sentiment Experiment  1  for MA : MIDPOINT the RMSE is:  4.768795618921396
Stock with Sentiment Experiment  1  for MA : MIDPOINT the MAE is:  3.944458615157319
Stock with Sentiment Experiment  1  for MA : T3 the MSE  is:  71.14601272964887
Stock with Sentiment Experiment  1  for MA : T3 the RMSE is:  8.434809584670473
Stock with Sentiment Experiment  1  for MA : T3 the MAE is:  6.848574357624394
Stock with Sentiment Experiment  1  for MA : TEMA the MSE  is:  26.608302345887367
Stock with Sentiment Experiment  1  for MA : TEMA the RMSE is:  5.158323598407468
Stock with Sentiment Experiment  1  for MA : TEMA the MAE is:  4.336839144940602
Stock with Sentiment Experiment  2  for MA : SMA the MSE  is:  39.87524298067742
Stock with Sentiment Experiment  2  for MA : SMA the RMSE is:  6.3146847095225125
Stock with Sentiment Experiment  2  for MA : SMA the MAE is:  5.088204561858354
Stock with Sentiment Experiment  2  for MA : EMA the MSE  is:  73.6938994303845
Stock with Sentiment Experiment  2  for MA : EMA the RMSE is:  8.584515095821342
Stock with Sentiment Experiment  2  for MA : EMA the MAE is:  7.207683534137507
Stock with Sentiment Experiment  2  for MA : WMA the MSE  is:  76.90233333404232
Stock with Sentiment Experiment  2  for MA : WMA the RMSE is:  8.76939754681257
Stock with Sentiment Experiment  2  for MA : WMA the MAE is:  7.121360770950225
Stock with Sentiment Experiment  2  for MA : DEMA the MSE  is:  131.0602137141292
Stock with Sentiment Experiment  2  for MA : DEMA the RMSE is:  11.448153288374908
Stock with Sentiment Experiment  2  for MA : DEMA the MAE is:  10.329401784343453
Stock with Sentiment Experiment  2  for MA : KAMA the MSE  is:  42.128331463904196
Stock with Sentiment Experiment  2  for MA : KAMA the RMSE is:  6.49063413418937
Stock with Sentiment Experiment  2  for MA : KAMA the MAE is:  5.225311201416733
Stock with Sentiment Experiment  2  for MA : MIDPOINT the MSE  is:  102.72361049298229
Stock with Sentiment Experiment  2  for MA : MIDPOINT the RMSE is:  10.135265684380567
Stock with Sentiment Experiment  2  for MA : MIDPOINT the MAE is:  8.372939983805384
Stock with Sentiment Experiment  2  for MA : T3 the MSE  is:  106.78678832331099
Stock with Sentiment Experiment  2  for MA : T3 the RMSE is:  10.333769318274479
Stock with Sentiment Experiment  2  for MA : T3 the MAE is:  8.334873887974807
Stock with Sentiment Experiment  2  for MA : TEMA the MSE  is:  78.66857412996737
Stock with Sentiment Experiment  2  for MA : TEMA the RMSE is:  8.86953066007257
Stock with Sentiment Experiment  2  for MA : TEMA the MAE is:  7.566655356961055
Stock with Sentiment Experiment  3  for MA : SMA the MSE  is:  39.71707698339657
Stock with Sentiment Experiment  3  for MA : SMA the RMSE is:  6.302148600548591
Stock with Sentiment Experiment  3  for MA : SMA the MAE is:  5.205453851116191
Stock with Sentiment Experiment  3  for MA : EMA the MSE  is:  18.546398408821787
Stock with Sentiment Experiment  3  for MA : EMA the RMSE is:  4.306552961339474
Stock with Sentiment Experiment  3  for MA : EMA the MAE is:  3.4160340524918316
Stock with Sentiment Experiment  3  for MA : WMA the MSE  is:  100.42411916020163
Stock with Sentiment Experiment  3  for MA : WMA the RMSE is:  10.021183520932126
Stock with Sentiment Experiment  3  for MA : WMA the MAE is:  8.070563561893302
Stock with Sentiment Experiment  3  for MA : DEMA the MSE  is:  42.724210554814
Stock with Sentiment Experiment  3  for MA : DEMA the RMSE is:  6.536375949623308
Stock with Sentiment Experiment  3  for MA : DEMA the MAE is:  5.453197818871469
Stock with Sentiment Experiment  3  for MA : KAMA the MSE  is:  23.65848136204503
Stock with Sentiment Experiment  3  for MA : KAMA the RMSE is:  4.863998495275778
Stock with Sentiment Experiment  3  for MA : KAMA the MAE is:  3.972687129543795
Stock with Sentiment Experiment  3  for MA : MIDPOINT the MSE  is:  21.13528196098577
Stock with Sentiment Experiment  3  for MA : MIDPOINT the RMSE is:  4.5973124715409295
Stock with Sentiment Experiment  3  for MA : MIDPOINT the MAE is:  3.8079919461917573
Stock with Sentiment Experiment  3  for MA : T3 the MSE  is:  101.89511494721017
Stock with Sentiment Experiment  3  for MA : T3 the RMSE is:  10.09431101894578
Stock with Sentiment Experiment  3  for MA : T3 the MAE is:  8.111172771475383
Stock with Sentiment Experiment  3  for MA : TEMA the MSE  is:  10.999022660597852
Stock with Sentiment Experiment  3  for MA : TEMA the RMSE is:  3.316477447623887
Stock with Sentiment Experiment  3  for MA : TEMA the MAE is:  2.655228978622383
Stock with Sentiment Experiment  4  for MA : SMA the MSE  is:  25.081351909450348
Stock with Sentiment Experiment  4  for MA : SMA the RMSE is:  5.008128583557969
Stock with Sentiment Experiment  4  for MA : SMA the MAE is:  3.9377037384058786
Stock with Sentiment Experiment  4  for MA : EMA the MSE  is:  37.86506779592453
Stock with Sentiment Experiment  4  for MA : EMA the RMSE is:  6.153459823215273
Stock with Sentiment Experiment  4  for MA : EMA the MAE is:  4.830217084369749
Stock with Sentiment Experiment  4  for MA : WMA the MSE  is:  53.37812690087662
Stock with Sentiment Experiment  4  for MA : WMA the RMSE is:  7.306033595657539
Stock with Sentiment Experiment  4  for MA : WMA the MAE is:  5.95316326588041
Stock with Sentiment Experiment  4  for MA : DEMA the MSE  is:  133.19678136227444
Stock with Sentiment Experiment  4  for MA : DEMA the RMSE is:  11.541090995320783
Stock with Sentiment Experiment  4  for MA : DEMA the MAE is:  10.29859546777107
Stock with Sentiment Experiment  4  for MA : KAMA the MSE  is:  20.693935177088164
Stock with Sentiment Experiment  4  for MA : KAMA the RMSE is:  4.549058713304123
Stock with Sentiment Experiment  4  for MA : KAMA the MAE is:  3.6577262429810227
Stock with Sentiment Experiment  4  for MA : MIDPOINT the MSE  is:  18.24421544500263
Stock with Sentiment Experiment  4  for MA : MIDPOINT the RMSE is:  4.271324788049093
Stock with Sentiment Experiment  4  for MA : MIDPOINT the MAE is:  3.3887721441386436
Stock with Sentiment Experiment  4  for MA : T3 the MSE  is:  74.9216743993189
Stock with Sentiment Experiment  4  for MA : T3 the RMSE is:  8.655730725901707
Stock with Sentiment Experiment  4  for MA : T3 the MAE is:  7.03412901576443
Stock with Sentiment Experiment  4  for MA : TEMA the MSE  is:  33.63623508699449
Stock with Sentiment Experiment  4  for MA : TEMA the RMSE is:  5.799675429452453
Stock with Sentiment Experiment  4  for MA : TEMA the MAE is:  5.291282498785388
Stock with Sentiment Experiment  5  for MA : SMA the MSE  is:  22.905457367129987
Stock with Sentiment Experiment  5  for MA : SMA the RMSE is:  4.785964622427749
Stock with Sentiment Experiment  5  for MA : SMA the MAE is:  4.003866138329542
Stock with Sentiment Experiment  5  for MA : EMA the MSE  is:  24.329194491765858
Stock with Sentiment Experiment  5  for MA : EMA the RMSE is:  4.932463328983385
Stock with Sentiment Experiment  5  for MA : EMA the MAE is:  4.099346553838579
Stock with Sentiment Experiment  5  for MA : WMA the MSE  is:  26.731339639305343
Stock with Sentiment Experiment  5  for MA : WMA the RMSE is:  5.170235936522176
Stock with Sentiment Experiment  5  for MA : WMA the MAE is:  4.142288801040536
Stock with Sentiment Experiment  5  for MA : DEMA the MSE  is:  20.555600166870196
Stock with Sentiment Experiment  5  for MA : DEMA the RMSE is:  4.53382842274277
Stock with Sentiment Experiment  5  for MA : DEMA the MAE is:  3.6522177332314283
Stock with Sentiment Experiment  5  for MA : KAMA the MSE  is:  36.658985139129086
Stock with Sentiment Experiment  5  for MA : KAMA the RMSE is:  6.054666393710646
Stock with Sentiment Experiment  5  for MA : KAMA the MAE is:  4.91375972579294
Stock with Sentiment Experiment  5  for MA : MIDPOINT the MSE  is:  46.211728483578064
Stock with Sentiment Experiment  5  for MA : MIDPOINT the RMSE is:  6.797920894183608
Stock with Sentiment Experiment  5  for MA : MIDPOINT the MAE is:  5.510818514624332
Stock with Sentiment Experiment  5  for MA : T3 the MSE  is:  39.422414094648474
Stock with Sentiment Experiment  5  for MA : T3 the RMSE is:  6.278727107833918
Stock with Sentiment Experiment  5  for MA : T3 the MAE is:  5.177910469752962
Stock with Sentiment Experiment  5  for MA : TEMA the MSE  is:  27.219221961342864
Stock with Sentiment Experiment  5  for MA : TEMA the RMSE is:  5.217204420122223
Stock with Sentiment Experiment  5  for MA : TEMA the MAE is:  4.028826161355838
Stock with Sentiment Experiment  6  for MA : SMA the MSE  is:  109.05489356902385
Stock with Sentiment Experiment  6  for MA : SMA the RMSE is:  10.442935103170173
Stock with Sentiment Experiment  6  for MA : SMA the MAE is:  8.732625329283675
Stock with Sentiment Experiment  6  for MA : EMA the MSE  is:  73.04142380966745
Stock with Sentiment Experiment  6  for MA : EMA the RMSE is:  8.546427546622475
Stock with Sentiment Experiment  6  for MA : EMA the MAE is:  7.099244401385842
Stock with Sentiment Experiment  6  for MA : WMA the MSE  is:  84.94978866341171
Stock with Sentiment Experiment  6  for MA : WMA the RMSE is:  9.21682096296829
Stock with Sentiment Experiment  6  for MA : WMA the MAE is:  7.490547440692417
Stock with Sentiment Experiment  6  for MA : DEMA the MSE  is:  151.61070955572364
Stock with Sentiment Experiment  6  for MA : DEMA the RMSE is:  12.313030072070955
Stock with Sentiment Experiment  6  for MA : DEMA the MAE is:  11.085595013418024
Stock with Sentiment Experiment  6  for MA : KAMA the MSE  is:  80.0014001509936
Stock with Sentiment Experiment  6  for MA : KAMA the RMSE is:  8.944350180476702
Stock with Sentiment Experiment  6  for MA : KAMA the MAE is:  7.358601729961256
Stock with Sentiment Experiment  6  for MA : MIDPOINT the MSE  is:  61.18379984959283
Stock with Sentiment Experiment  6  for MA : MIDPOINT the RMSE is:  7.822007405365507
Stock with Sentiment Experiment  6  for MA : MIDPOINT the MAE is:  6.441728946960992
Stock with Sentiment Experiment  6  for MA : T3 the MSE  is:  110.56518298054853
Stock with Sentiment Experiment  6  for MA : T3 the RMSE is:  10.514998001927937
Stock with Sentiment Experiment  6  for MA : T3 the MAE is:  8.473394546481362
Stock with Sentiment Experiment  6  for MA : TEMA the MSE  is:  69.56753550271695
Stock with Sentiment Experiment  6  for MA : TEMA the RMSE is:  8.340715527022663
Stock with Sentiment Experiment  6  for MA : TEMA the MAE is:  7.185876850367952
Stock with Sentiment Experiment  7  for MA : SMA the MSE  is:  133.6684179910327
Stock with Sentiment Experiment  7  for MA : SMA the RMSE is:  11.561505870388714
Stock with Sentiment Experiment  7  for MA : SMA the MAE is:  10.289389775089397
Stock with Sentiment Experiment  7  for MA : EMA the MSE  is:  34.52931938659473
Stock with Sentiment Experiment  7  for MA : EMA the RMSE is:  5.876165364129462
Stock with Sentiment Experiment  7  for MA : EMA the MAE is:  4.852473639818076
Stock with Sentiment Experiment  7  for MA : WMA the MSE  is:  36.74468487644727
Stock with Sentiment Experiment  7  for MA : WMA the RMSE is:  6.061739426637149
Stock with Sentiment Experiment  7  for MA : WMA the MAE is:  4.85767480758183
Stock with Sentiment Experiment  7  for MA : DEMA the MSE  is:  87.45802496937279
Stock with Sentiment Experiment  7  for MA : DEMA the RMSE is:  9.351899538028238
Stock with Sentiment Experiment  7  for MA : DEMA the MAE is:  8.239361009856534
Stock with Sentiment Experiment  7  for MA : KAMA the MSE  is:  55.655998887145394
Stock with Sentiment Experiment  7  for MA : KAMA the RMSE is:  7.460294825752223
Stock with Sentiment Experiment  7  for MA : KAMA the MAE is:  6.325008398714769
Stock with Sentiment Experiment  7  for MA : MIDPOINT the MSE  is:  28.58496814832563
Stock with Sentiment Experiment  7  for MA : MIDPOINT the RMSE is:  5.346491199686541
Stock with Sentiment Experiment  7  for MA : MIDPOINT the MAE is:  4.412567377496203
Stock with Sentiment Experiment  7  for MA : T3 the MSE  is:  127.72960460138046
Stock with Sentiment Experiment  7  for MA : T3 the RMSE is:  11.301752280128088
Stock with Sentiment Experiment  7  for MA : T3 the MAE is:  9.16046603172084
Stock with Sentiment Experiment  7  for MA : TEMA the MSE  is:  34.19976389234231
Stock with Sentiment Experiment  7  for MA : TEMA the RMSE is:  5.8480564200717415
Stock with Sentiment Experiment  7  for MA : TEMA the MAE is:  5.017065986818192
Stock with Sentiment Experiment  8  for MA : SMA the MSE  is:  23.82392201920814
Stock with Sentiment Experiment  8  for MA : SMA the RMSE is:  4.880975519218278
Stock with Sentiment Experiment  8  for MA : SMA the MAE is:  3.848681389687446
Stock with Sentiment Experiment  8  for MA : EMA the MSE  is:  36.51946703896653
Stock with Sentiment Experiment  8  for MA : EMA the RMSE is:  6.043133875644865
Stock with Sentiment Experiment  8  for MA : EMA the MAE is:  4.745071953502077
Stock with Sentiment Experiment  8  for MA : WMA the MSE  is:  43.8622174139386
Stock with Sentiment Experiment  8  for MA : WMA the RMSE is:  6.622855684214975
Stock with Sentiment Experiment  8  for MA : WMA the MAE is:  5.251112001385055
Stock with Sentiment Experiment  8  for MA : DEMA the MSE  is:  142.0772798190819
Stock with Sentiment Experiment  8  for MA : DEMA the RMSE is:  11.91961743593652
Stock with Sentiment Experiment  8  for MA : DEMA the MAE is:  10.62322903609799
Stock with Sentiment Experiment  8  for MA : KAMA the MSE  is:  22.038946216129663
Stock with Sentiment Experiment  8  for MA : KAMA the RMSE is:  4.694565604625168
Stock with Sentiment Experiment  8  for MA : KAMA the MAE is:  3.7548184287573756
Stock with Sentiment Experiment  8  for MA : MIDPOINT the MSE  is:  18.4805779796863
Stock with Sentiment Experiment  8  for MA : MIDPOINT the RMSE is:  4.298904276636815
Stock with Sentiment Experiment  8  for MA : MIDPOINT the MAE is:  3.422385888494848
Stock with Sentiment Experiment  8  for MA : T3 the MSE  is:  77.67949047095865
Stock with Sentiment Experiment  8  for MA : T3 the RMSE is:  8.813596908808496
Stock with Sentiment Experiment  8  for MA : T3 the MAE is:  7.190279750268759
Stock with Sentiment Experiment  8  for MA : TEMA the MSE  is:  31.990355699152573
Stock with Sentiment Experiment  8  for MA : TEMA the RMSE is:  5.656001741438255
Stock with Sentiment Experiment  8  for MA : TEMA the MAE is:  5.124316314221195
In [7]:
text = 'Stock with Sentiment Trends '
simulations = [simulation1, simulation2, simulation3, simulation4,
               simulation5, simulation6, simulation7, simulation8]
# Print each error metric for every moving-average type, one metric at a
# time, so RMSE, MSE and MAE appear in separate runs per experiment.
for i, simulation in enumerate(simulations):
  for label, metric in [('RMSE', 'rmse'), ('MSE ', 'mse'), ('MAE', 'mae')]:
    for ma in simulation:
      print(text + 'Experiment ', i + 1, ' for MA :', ma,
            'the ' + label + ' is: ', simulation[ma]['final'][metric])
Stock with Sentiment Trends Experiment  1  for MA : SMA the RMSE is:  6.268680607174258
Stock with Sentiment Trends Experiment  1  for MA : EMA the RMSE is:  5.50005196725239
Stock with Sentiment Trends Experiment  1  for MA : WMA the RMSE is:  8.057694292448089
Stock with Sentiment Trends Experiment  1  for MA : DEMA the RMSE is:  7.279183760008
Stock with Sentiment Trends Experiment  1  for MA : KAMA the RMSE is:  5.796783486223054
Stock with Sentiment Trends Experiment  1  for MA : MIDPOINT the RMSE is:  4.768795618921396
Stock with Sentiment Trends Experiment  1  for MA : T3 the RMSE is:  8.434809584670473
Stock with Sentiment Trends Experiment  1  for MA : TEMA the RMSE is:  5.158323598407468
Stock with Sentiment Trends Experiment  1  for MA : SMA the MSE  is:  39.29635655476263
Stock with Sentiment Trends Experiment  1  for MA : EMA the MSE  is:  30.25057164247689
Stock with Sentiment Trends Experiment  1  for MA : WMA the MSE  is:  64.92643731055051
Stock with Sentiment Trends Experiment  1  for MA : DEMA the MSE  is:  52.98651621196421
Stock with Sentiment Trends Experiment  1  for MA : KAMA the MSE  is:  33.6026987861483
Stock with Sentiment Trends Experiment  1  for MA : MIDPOINT the MSE  is:  22.7414116550439
Stock with Sentiment Trends Experiment  1  for MA : T3 the MSE  is:  71.14601272964887
Stock with Sentiment Trends Experiment  1  for MA : TEMA the MSE  is:  26.608302345887367
Stock with Sentiment Trends Experiment  1  for MA : SMA the MAE is:  5.07416987111494
Stock with Sentiment Trends Experiment  1  for MA : EMA the MAE is:  4.444486270049439
Stock with Sentiment Trends Experiment  1  for MA : WMA the MAE is:  6.289003841800032
Stock with Sentiment Trends Experiment  1  for MA : DEMA the MAE is:  5.725540843661134
Stock with Sentiment Trends Experiment  1  for MA : KAMA the MAE is:  4.518981487962124
Stock with Sentiment Trends Experiment  1  for MA : MIDPOINT the MAE is:  3.944458615157319
Stock with Sentiment Trends Experiment  1  for MA : T3 the MAE is:  6.848574357624394
Stock with Sentiment Trends Experiment  1  for MA : TEMA the MAE is:  4.336839144940602
Stock with Sentiment Trends Experiment  2  for MA : SMA the RMSE is:  6.3146847095225125
Stock with Sentiment Trends Experiment  2  for MA : EMA the RMSE is:  8.584515095821342
Stock with Sentiment Trends Experiment  2  for MA : WMA the RMSE is:  8.76939754681257
Stock with Sentiment Trends Experiment  2  for MA : DEMA the RMSE is:  11.448153288374908
Stock with Sentiment Trends Experiment  2  for MA : KAMA the RMSE is:  6.49063413418937
Stock with Sentiment Trends Experiment  2  for MA : MIDPOINT the RMSE is:  10.135265684380567
Stock with Sentiment Trends Experiment  2  for MA : T3 the RMSE is:  10.333769318274479
Stock with Sentiment Trends Experiment  2  for MA : TEMA the RMSE is:  8.86953066007257
Stock with Sentiment Trends Experiment  2  for MA : SMA the MSE  is:  39.87524298067742
Stock with Sentiment Trends Experiment  2  for MA : EMA the MSE  is:  73.6938994303845
Stock with Sentiment Trends Experiment  2  for MA : WMA the MSE  is:  76.90233333404232
Stock with Sentiment Trends Experiment  2  for MA : DEMA the MSE  is:  131.0602137141292
Stock with Sentiment Trends Experiment  2  for MA : KAMA the MSE  is:  42.128331463904196
Stock with Sentiment Trends Experiment  2  for MA : MIDPOINT the MSE  is:  102.72361049298229
Stock with Sentiment Trends Experiment  2  for MA : T3 the MSE  is:  106.78678832331099
Stock with Sentiment Trends Experiment  2  for MA : TEMA the MSE  is:  78.66857412996737
Stock with Sentiment Trends Experiment  2  for MA : SMA the MAE is:  5.088204561858354
Stock with Sentiment Trends Experiment  2  for MA : EMA the MAE is:  7.207683534137507
Stock with Sentiment Trends Experiment  2  for MA : WMA the MAE is:  7.121360770950225
Stock with Sentiment Trends Experiment  2  for MA : DEMA the MAE is:  10.329401784343453
Stock with Sentiment Trends Experiment  2  for MA : KAMA the MAE is:  5.225311201416733
Stock with Sentiment Trends Experiment  2  for MA : MIDPOINT the MAE is:  8.372939983805384
Stock with Sentiment Trends Experiment  2  for MA : T3 the MAE is:  8.334873887974807
Stock with Sentiment Trends Experiment  2  for MA : TEMA the MAE is:  7.566655356961055
Stock with Sentiment Trends Experiment  3  for MA : SMA the RMSE is:  6.302148600548591
Stock with Sentiment Trends Experiment  3  for MA : EMA the RMSE is:  4.306552961339474
Stock with Sentiment Trends Experiment  3  for MA : WMA the RMSE is:  10.021183520932126
Stock with Sentiment Trends Experiment  3  for MA : DEMA the RMSE is:  6.536375949623308
Stock with Sentiment Trends Experiment  3  for MA : KAMA the RMSE is:  4.863998495275778
Stock with Sentiment Trends Experiment  3  for MA : MIDPOINT the RMSE is:  4.5973124715409295
Stock with Sentiment Trends Experiment  3  for MA : T3 the RMSE is:  10.09431101894578
Stock with Sentiment Trends Experiment  3  for MA : TEMA the RMSE is:  3.316477447623887
Stock with Sentiment Trends Experiment  3  for MA : SMA the MSE  is:  39.71707698339657
Stock with Sentiment Trends Experiment  3  for MA : EMA the MSE  is:  18.546398408821787
Stock with Sentiment Trends Experiment  3  for MA : WMA the MSE  is:  100.42411916020163
Stock with Sentiment Trends Experiment  3  for MA : DEMA the MSE  is:  42.724210554814
Stock with Sentiment Trends Experiment  3  for MA : KAMA the MSE  is:  23.65848136204503
Stock with Sentiment Trends Experiment  3  for MA : MIDPOINT the MSE  is:  21.13528196098577
Stock with Sentiment Trends Experiment  3  for MA : T3 the MSE  is:  101.89511494721017
Stock with Sentiment Trends Experiment  3  for MA : TEMA the MSE  is:  10.999022660597852
Stock with Sentiment Trends Experiment  3  for MA : SMA the MAE is:  5.205453851116191
Stock with Sentiment Trends Experiment  3  for MA : EMA the MAE is:  3.4160340524918316
Stock with Sentiment Trends Experiment  3  for MA : WMA the MAE is:  8.070563561893302
Stock with Sentiment Trends Experiment  3  for MA : DEMA the MAE is:  5.453197818871469
Stock with Sentiment Trends Experiment  3  for MA : KAMA the MAE is:  3.972687129543795
Stock with Sentiment Trends Experiment  3  for MA : MIDPOINT the MAE is:  3.8079919461917573
Stock with Sentiment Trends Experiment  3  for MA : T3 the MAE is:  8.111172771475383
Stock with Sentiment Trends Experiment  3  for MA : TEMA the MAE is:  2.655228978622383
Stock with Sentiment Trends Experiment  4  for MA : SMA the RMSE is:  5.008128583557969
Stock with Sentiment Trends Experiment  4  for MA : EMA the RMSE is:  6.153459823215273
Stock with Sentiment Trends Experiment  4  for MA : WMA the RMSE is:  7.306033595657539
Stock with Sentiment Trends Experiment  4  for MA : DEMA the RMSE is:  11.541090995320783
Stock with Sentiment Trends Experiment  4  for MA : KAMA the RMSE is:  4.549058713304123
Stock with Sentiment Trends Experiment  4  for MA : MIDPOINT the RMSE is:  4.271324788049093
Stock with Sentiment Trends Experiment  4  for MA : T3 the RMSE is:  8.655730725901707
Stock with Sentiment Trends Experiment  4  for MA : TEMA the RMSE is:  5.799675429452453
Stock with Sentiment Trends Experiment  4  for MA : SMA the MSE  is:  25.081351909450348
Stock with Sentiment Trends Experiment  4  for MA : EMA the MSE  is:  37.86506779592453
Stock with Sentiment Trends Experiment  4  for MA : WMA the MSE  is:  53.37812690087662
Stock with Sentiment Trends Experiment  4  for MA : DEMA the MSE  is:  133.19678136227444
Stock with Sentiment Trends Experiment  4  for MA : KAMA the MSE  is:  20.693935177088164
Stock with Sentiment Trends Experiment  4  for MA : MIDPOINT the MSE  is:  18.24421544500263
Stock with Sentiment Trends Experiment  4  for MA : T3 the MSE  is:  74.9216743993189
Stock with Sentiment Trends Experiment  4  for MA : TEMA the MSE  is:  33.63623508699449
Stock with Sentiment Trends Experiment  4  for MA : SMA the MAE is:  3.9377037384058786
Stock with Sentiment Trends Experiment  4  for MA : EMA the MAE is:  4.830217084369749
Stock with Sentiment Trends Experiment  4  for MA : WMA the MAE is:  5.95316326588041
Stock with Sentiment Trends Experiment  4  for MA : DEMA the MAE is:  10.29859546777107
Stock with Sentiment Trends Experiment  4  for MA : KAMA the MAE is:  3.6577262429810227
Stock with Sentiment Trends Experiment  4  for MA : MIDPOINT the MAE is:  3.3887721441386436
Stock with Sentiment Trends Experiment  4  for MA : T3 the MAE is:  7.03412901576443
Stock with Sentiment Trends Experiment  4  for MA : TEMA the MAE is:  5.291282498785388
Stock with Sentiment Trends Experiment  5  for MA : SMA the RMSE is:  4.785964622427749
Stock with Sentiment Trends Experiment  5  for MA : EMA the RMSE is:  4.932463328983385
Stock with Sentiment Trends Experiment  5  for MA : WMA the RMSE is:  5.170235936522176
Stock with Sentiment Trends Experiment  5  for MA : DEMA the RMSE is:  4.53382842274277
Stock with Sentiment Trends Experiment  5  for MA : KAMA the RMSE is:  6.054666393710646
Stock with Sentiment Trends Experiment  5  for MA : MIDPOINT the RMSE is:  6.797920894183608
Stock with Sentiment Trends Experiment  5  for MA : T3 the RMSE is:  6.278727107833918
Stock with Sentiment Trends Experiment  5  for MA : TEMA the RMSE is:  5.217204420122223
Stock with Sentiment Trends Experiment  5  for MA : SMA the MSE  is:  22.905457367129987
Stock with Sentiment Trends Experiment  5  for MA : EMA the MSE  is:  24.329194491765858
Stock with Sentiment Trends Experiment  5  for MA : WMA the MSE  is:  26.731339639305343
Stock with Sentiment Trends Experiment  5  for MA : DEMA the MSE  is:  20.555600166870196
Stock with Sentiment Trends Experiment  5  for MA : KAMA the MSE  is:  36.658985139129086
Stock with Sentiment Trends Experiment  5  for MA : MIDPOINT the MSE  is:  46.211728483578064
Stock with Sentiment Trends Experiment  5  for MA : T3 the MSE  is:  39.422414094648474
Stock with Sentiment Trends Experiment  5  for MA : TEMA the MSE  is:  27.219221961342864
Stock with Sentiment Trends Experiment  5  for MA : SMA the MAE is:  4.003866138329542
Stock with Sentiment Trends Experiment  5  for MA : EMA the MAE is:  4.099346553838579
Stock with Sentiment Trends Experiment  5  for MA : WMA the MAE is:  4.142288801040536
Stock with Sentiment Trends Experiment  5  for MA : DEMA the MAE is:  3.6522177332314283
Stock with Sentiment Trends Experiment  5  for MA : KAMA the MAE is:  4.91375972579294
Stock with Sentiment Trends Experiment  5  for MA : MIDPOINT the MAE is:  5.510818514624332
Stock with Sentiment Trends Experiment  5  for MA : T3 the MAE is:  5.177910469752962
Stock with Sentiment Trends Experiment  5  for MA : TEMA the MAE is:  4.028826161355838
Stock with Sentiment Trends Experiment  6  for MA : SMA the RMSE is:  10.442935103170173
Stock with Sentiment Trends Experiment  6  for MA : EMA the RMSE is:  8.546427546622475
Stock with Sentiment Trends Experiment  6  for MA : WMA the RMSE is:  9.21682096296829
Stock with Sentiment Trends Experiment  6  for MA : DEMA the RMSE is:  12.313030072070955
Stock with Sentiment Trends Experiment  6  for MA : KAMA the RMSE is:  8.944350180476702
Stock with Sentiment Trends Experiment  6  for MA : MIDPOINT the RMSE is:  7.822007405365507
Stock with Sentiment Trends Experiment  6  for MA : T3 the RMSE is:  10.514998001927937
Stock with Sentiment Trends Experiment  6  for MA : TEMA the RMSE is:  8.340715527022663
Stock with Sentiment Trends Experiment  6  for MA : SMA the MSE  is:  109.05489356902385
Stock with Sentiment Trends Experiment  6  for MA : EMA the MSE  is:  73.04142380966745
Stock with Sentiment Trends Experiment  6  for MA : WMA the MSE  is:  84.94978866341171
Stock with Sentiment Trends Experiment  6  for MA : DEMA the MSE  is:  151.61070955572364
Stock with Sentiment Trends Experiment  6  for MA : KAMA the MSE  is:  80.0014001509936
Stock with Sentiment Trends Experiment  6  for MA : MIDPOINT the MSE  is:  61.18379984959283
Stock with Sentiment Trends Experiment  6  for MA : T3 the MSE  is:  110.56518298054853
Stock with Sentiment Trends Experiment  6  for MA : TEMA the MSE  is:  69.56753550271695
Stock with Sentiment Trends Experiment  6  for MA : SMA the MAE is:  8.732625329283675
Stock with Sentiment Trends Experiment  6  for MA : EMA the MAE is:  7.099244401385842
Stock with Sentiment Trends Experiment  6  for MA : WMA the MAE is:  7.490547440692417
Stock with Sentiment Trends Experiment  6  for MA : DEMA the MAE is:  11.085595013418024
Stock with Sentiment Trends Experiment  6  for MA : KAMA the MAE is:  7.358601729961256
Stock with Sentiment Trends Experiment  6  for MA : MIDPOINT the MAE is:  6.441728946960992
Stock with Sentiment Trends Experiment  6  for MA : T3 the MAE is:  8.473394546481362
Stock with Sentiment Trends Experiment  6  for MA : TEMA the MAE is:  7.185876850367952
Stock with Sentiment Trends Experiment  7  for MA : SMA the RMSE is:  11.561505870388714
Stock with Sentiment Trends Experiment  7  for MA : EMA the RMSE is:  5.876165364129462
Stock with Sentiment Trends Experiment  7  for MA : WMA the RMSE is:  6.061739426637149
Stock with Sentiment Trends Experiment  7  for MA : DEMA the RMSE is:  9.351899538028238
Stock with Sentiment Trends Experiment  7  for MA : KAMA the RMSE is:  7.460294825752223
Stock with Sentiment Trends Experiment  7  for MA : MIDPOINT the RMSE is:  5.346491199686541
Stock with Sentiment Trends Experiment  7  for MA : T3 the RMSE is:  11.301752280128088
Stock with Sentiment Trends Experiment  7  for MA : TEMA the RMSE is:  5.8480564200717415
Stock with Sentiment Trends Experiment  7  for MA : SMA the MSE  is:  133.6684179910327
Stock with Sentiment Trends Experiment  7  for MA : EMA the MSE  is:  34.52931938659473
Stock with Sentiment Trends Experiment  7  for MA : WMA the MSE  is:  36.74468487644727
Stock with Sentiment Trends Experiment  7  for MA : DEMA the MSE  is:  87.45802496937279
Stock with Sentiment Trends Experiment  7  for MA : KAMA the MSE  is:  55.655998887145394
Stock with Sentiment Trends Experiment  7  for MA : MIDPOINT the MSE  is:  28.58496814832563
Stock with Sentiment Trends Experiment  7  for MA : T3 the MSE  is:  127.72960460138046
Stock with Sentiment Trends Experiment  7  for MA : TEMA the MSE  is:  34.19976389234231
Stock with Sentiment Trends Experiment  7  for MA : SMA the MAE is:  10.289389775089397
Stock with Sentiment Trends Experiment  7  for MA : EMA the MAE is:  4.852473639818076
Stock with Sentiment Trends Experiment  7  for MA : WMA the MAE is:  4.85767480758183
Stock with Sentiment Trends Experiment  7  for MA : DEMA the MAE is:  8.239361009856534
Stock with Sentiment Trends Experiment  7  for MA : KAMA the MAE is:  6.325008398714769
Stock with Sentiment Trends Experiment  7  for MA : MIDPOINT the MAE is:  4.412567377496203
Stock with Sentiment Trends Experiment  7  for MA : T3 the MAE is:  9.16046603172084
Stock with Sentiment Trends Experiment  7  for MA : TEMA the MAE is:  5.017065986818192
Stock with Sentiment Trends Experiment  8  for MA : SMA the RMSE is:  4.880975519218278
Stock with Sentiment Trends Experiment  8  for MA : EMA the RMSE is:  6.043133875644865
Stock with Sentiment Trends Experiment  8  for MA : WMA the RMSE is:  6.622855684214975
Stock with Sentiment Trends Experiment  8  for MA : DEMA the RMSE is:  11.91961743593652
Stock with Sentiment Trends Experiment  8  for MA : KAMA the RMSE is:  4.694565604625168
Stock with Sentiment Trends Experiment  8  for MA : MIDPOINT the RMSE is:  4.298904276636815
Stock with Sentiment Trends Experiment  8  for MA : T3 the RMSE is:  8.813596908808496
Stock with Sentiment Trends Experiment  8  for MA : TEMA the RMSE is:  5.656001741438255
Stock with Sentiment Trends Experiment  8  for MA : SMA the MSE  is:  23.82392201920814
Stock with Sentiment Trends Experiment  8  for MA : EMA the MSE  is:  36.51946703896653
Stock with Sentiment Trends Experiment  8  for MA : WMA the MSE  is:  43.8622174139386
Stock with Sentiment Trends Experiment  8  for MA : DEMA the MSE  is:  142.0772798190819
Stock with Sentiment Trends Experiment  8  for MA : KAMA the MSE  is:  22.038946216129663
Stock with Sentiment Trends Experiment  8  for MA : MIDPOINT the MSE  is:  18.4805779796863
Stock with Sentiment Trends Experiment  8  for MA : T3 the MSE  is:  77.67949047095865
Stock with Sentiment Trends Experiment  8  for MA : TEMA the MSE  is:  31.990355699152573
Stock with Sentiment Trends Experiment  8  for MA : SMA the MAE is:  3.848681389687446
Stock with Sentiment Trends Experiment  8  for MA : EMA the MAE is:  4.745071953502077
Stock with Sentiment Trends Experiment  8  for MA : WMA the MAE is:  5.251112001385055
Stock with Sentiment Trends Experiment  8  for MA : DEMA the MAE is:  10.62322903609799
Stock with Sentiment Trends Experiment  8  for MA : KAMA the MAE is:  3.7548184287573756
Stock with Sentiment Trends Experiment  8  for MA : MIDPOINT the MAE is:  3.422385888494848
Stock with Sentiment Trends Experiment  8  for MA : T3 the MAE is:  7.190279750268759
Stock with Sentiment Trends Experiment  8  for MA : TEMA the MAE is:  5.124316314221195
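Reading the best-performing moving average out of the print loops above is tedious. A minimal sketch of an alternative: collect the same nested `simulation` dicts (`{ma: {'final': {'mse': ..., 'rmse': ..., 'mae': ...}}}`) into a pandas DataFrame and pick the lowest-RMSE MA per experiment. The numbers below are toy stand-ins, not the real results.

```python
import pandas as pd

# Toy stand-ins with the same nesting as the real simulation dicts above.
simulations = [
    {'SMA':      {'final': {'mse': 39.3,  'rmse': 6.27,  'mae': 5.07}},
     'EMA':      {'final': {'mse': 30.3,  'rmse': 5.50,  'mae': 4.44}},
     'MIDPOINT': {'final': {'mse': 22.7,  'rmse': 4.77,  'mae': 3.94}}},
    {'SMA':      {'final': {'mse': 39.9,  'rmse': 6.31,  'mae': 5.09}},
     'EMA':      {'final': {'mse': 73.7,  'rmse': 8.58,  'mae': 7.21}},
     'MIDPOINT': {'final': {'mse': 102.7, 'rmse': 10.14, 'mae': 8.37}}},
]

# Flatten into one row per (experiment, moving average).
rows = [
    {'experiment': i + 1, 'ma': ma, **sim[ma]['final']}
    for i, sim in enumerate(simulations)
    for ma in sim
]
results = pd.DataFrame(rows)

# Best MA per experiment, judged by RMSE.
best = results.loc[results.groupby('experiment')['rmse'].idxmin()]
print(best[['experiment', 'ma', 'rmse']])
```

The same `groupby` can rank by MSE or MAE instead by swapping the column name.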

Create HTML

In [2]:
from google.colab import drive
drive.mount('/content/drive')
Mounted at /content/drive
In [115]:
cd ..
/content/drive/.shortcut-targets-by-id/1IaGjVBlTspxI2CHSrxfYnaiYvsaG0pHs/Stock price prediction/Archana - LSTM Hybrid/Outputs
In [5]:
cd drive/MyDrive/Stock price prediction/Archana - LSTM Hybrid
/content/drive/.shortcut-targets-by-id/1IaGjVBlTspxI2CHSrxfYnaiYvsaG0pHs/Stock price prediction/Archana - LSTM Hybrid
In [ ]:
%%shell
jupyter nbconvert --to html LSTM_Hybrid_using_TA_LIB_BullBear.ipynb
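The shell cell above runs the nbconvert CLI; the same conversion can be done from Python via nbconvert's API. A minimal sketch, using a tiny in-memory notebook so the example is self-contained (the real file would instead be loaded with `nbformat.read('LSTM_Hybrid_using_TA_LIB_BullBear.ipynb', as_version=4)`):

```python
import nbformat
from nbconvert import HTMLExporter

# Build a tiny notebook in memory as a stand-in for the real .ipynb.
nb = nbformat.v4.new_notebook()
nb.cells.append(nbformat.v4.new_markdown_cell('# Hybrid ARIMA-LSTM'))

# Convert the notebook node to an HTML string and write it out.
html, _resources = HTMLExporter().from_notebook_node(nb)
with open('output.html', 'w') as f:
    f.write(html)
```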